Hi All,
I wanted to discuss a few implementation details regarding the GTE module and
would like your feedback on certain aspects, as below.
==================
GTE introductions:
==================
Nvidia Tegra SoCs have a generic timestamping engine (GTE) hardware module
which can monitor SoC signals such as IRQ lines and GPIO lines for state
changes; upon detecting a change, it timestamps the event and stores it in its
internal hardware FIFO. The advantage of the GTE module can be realized in
applications like robotics or autonomous vehicles, where it can help record
events with precise timestamps. For GPIO monitoring, however, the GTE module
complicates things compared to IRQ lines, since it depends on the GPIO
controller, and that is where I will probably need your feedback to figure a
few things out before sending the patches for review. To put things into
perspective, below is the rough sequence to enable the hardware timestamp for
a given signal using the GTE module.
============
For GPIO:
============
1. The GPIO has to be configured as input and its IRQ must be enabled.
2. Ask the GPIO controller driver to set the corresponding timestamp bit in
the specified GPIO config register.
3. Translate the GPIO specified by the client to the GTE's internal bitmap.
3.a For example, if the client specifies GPIO line 31, it could be bit 13 of a
GTE register.
4. Set internal bits to enable monitoring in the GTE module.
5. Additionally, the GTE driver can open up lanes for a user space application
as a client and send timestamping events directly to the application.
============
For IRQ:
============
x. The client sends an IRQ line number to the GTE driver.
y. There is no need to translate, as there is a one-to-one correspondence with
the internal bitmap; for example, IRQ line 31 will be bit 31 of the GTE
internal register.
z. Set the required bits.
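To make the translation in steps 3 and y concrete, here is a rough sketch. The mapping table is made up for illustration (it only mirrors the "GPIO 31 -> bit 13" example from 3.a); the real one would be hardware-specific:

```c
/* Hypothetical mapping from a SoC GPIO line number to the bit position
 * in a GTE enable register.  The real table is hardware-specific; the
 * entry below only mirrors the "GPIO 31 -> bit 13" example from 3.a. */
struct gte_gpio_map {
	unsigned int gpio;	/* controller GPIO line number */
	unsigned int bit;	/* bit index inside the GTE register */
};

static const struct gte_gpio_map gte_gpio_table[] = {
	{ 31, 13 },	/* example translation from bullet 3.a */
};

/* Return the GTE bit for a GPIO line, or -1 if the line is not
 * routed to the GTE at all. */
static int gte_gpio_to_bit(unsigned int gpio)
{
	unsigned int i;

	for (i = 0; i < sizeof(gte_gpio_table) / sizeof(gte_gpio_table[0]); i++)
		if (gte_gpio_table[i].gpio == gpio)
			return gte_gpio_table[i].bit;
	return -1;
}

/* IRQ lines need no table: bullet y says the mapping is 1:1. */
static int gte_irq_to_bit(unsigned int irq)
{
	return irq;
}
```

The lookup failing (-1) doubles as the "this line has no GTE support" check.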
====================================================================
Doubts (specifically for bullets 1, 2 and 3 from the GPIO section above):
====================================================================
b. Should the GTE driver expect its client to send the GPIO number as per the
GPIO controller/framework numbering/namespace scheme?
b.1 The possible issues with this approach are:
b.1.1 It has to make the of_find_gpiochip_by_node function public, which the
GTE driver can use to confirm that the GPIO number the client sent
indeed belongs to a controller which supports the timestamp functions,
as not all GPIO controllers support them.
b.1.2 The GTE driver will need the GPIO controller node, passed either through
its own device tree or through some other method (if one exists at all),
to facilitate b.1.1.
c. How can the GTE driver talk to the GPIO framework regarding bullet 2?
c.1 If it is through some callbacks, then a "timestamp_control"-like function
hook has to be added to the gpio framework structure. Under the assumption
that bullet b uses the GPIO numbering scheme, we can then pass the same
GPIO number to this hook to enable the timestamp bit in the GPIO
controller.
d. GPIO logical numbering happens at run time, so the GTE driver has to take
care of b.1.1, b.1.2 and c.1.
e. Using the above b.1.1, b.1.2 and c.1, the GTE module can do bullets 1 and 2
for its clients, or at least bullet 2.
f. The other alternative is for the GTE to have its own GPIO numbering for its
clients.
f.1 This approach completely decouples the GPIO controller and the GTE: the
client would be responsible for bullet 1, and the GPIO controller driver
would be responsible for bullet 2, possibly setting the timestamp bit of
all the GPIOs it supports during probe. Since real timestamping starts
only after the GTE driver programs its corresponding registers anyway,
this can be considered harmless.
f.2 I am leaning more towards this approach.
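For the callback approach in c.1, a minimal sketch of the hook is below. All names are hypothetical (no such hook exists in gpiolib today), and the stub struct stands in for the real gpio_chip:

```c
/* Sketch only: a hypothetical "timestamp_control" hook as described in
 * c.1.  Neither the hook nor this struct layout exists in gpiolib; the
 * stub illustrates how the GTE driver could ask the GPIO controller to
 * flip the per-line timestamp enable bit (bullet 2). */
struct gpio_chip_stub {
	/* hypothetical hook the controller driver would implement */
	int (*timestamp_control)(struct gpio_chip_stub *gc,
				 unsigned int offset, int enable);
};

/* What the GTE driver (or gpiolib on its behalf) would call. */
static int gte_enable_gpio_timestamp(struct gpio_chip_stub *gc,
				     unsigned int offset)
{
	if (!gc->timestamp_control)
		return -1;	/* controller has no HW timestamp support */
	return gc->timestamp_control(gc, offset, 1);
}

/* Tiny fake controller used only to exercise the sketch. */
static unsigned int fake_ts_bits;
static int fake_timestamp_control(struct gpio_chip_stub *gc,
				  unsigned int offset, int enable)
{
	(void)gc;
	if (enable)
		fake_ts_bits |= 1u << offset;
	else
		fake_ts_bits &= ~(1u << offset);
	return 0;
}
```

A missing hook naturally answers the b.1.1 question of whether the controller supports timestamping at all.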
===============================================
Doubts regarding where to place it in the kernel:
===============================================
g. Does the GTE deserve its own subsystem, or should it be consumed as part of
some other subsystem?
g.1 GTE GPIO timestamp monitoring comes very close to what we already have in
gpiolib, more specifically the lineevent part. Having said that, however,
if I were to add the GTE inside the GPIO framework "somehow", it would
require non-trivial gpio framework changes and at the same time would
leave the GTE module fragmented, since the GTE also timestamps signals
other than GPIOs, such as IRQs, as mentioned previously.
h. I am leaning more towards having its own subsystem.
Best Regards,
Dipen Patel
Hi Dipen,
thanks for your mail!
I involved some other kernel people to get some discussion.
I think Kent Gibson can be of great help because he is using
GPIOs with high precision.
We actually discussed this a bit when adding support for
realtime timestamps.
On Wed, Mar 17, 2021 at 11:29 PM Dipen Patel <[email protected]> wrote:
> Nvidia Tegra SoCs have generic timestamping engine (GTE) hardware module which
> can monitor SoC signals like IRQ lines and GPIO lines for state change, upon
> detecting the change, it can timestamp and store in its internal hardware FIFO.
> The advantage of the GTE module can be realized in applications like robotics
> or autonomous vehicle where it can help record events with precise timestamp.
That sounds very useful.
Certainly the kernel shall be able to handle this.
> ============
> For GPIO:
> ============
> 1. GPIO has to be configured as input and IRQ must be enabled.
> 2. Ask GPIO controller driver to set corresponding timestamp bit in the
> specified GPIO config register.
> 3. Translate GPIO specified by the client to its internal bitmap.
> 3.a For example, If client specifies GPIO line 31, it could be bit 13 of GTE
> register.
> 4. Set internal bits to enable monitoring in GTE module
> 5. Additionally GTE driver can open up lanes for the user space application
> as a client and can send timestamping events directly to the application.
I have some concerns:
1. GPIO should for all professional applications be used with the character
device /dev/gpiochipN, under no circumstances shall the old sysfs
ABI be used for this. In this case it is necessary because the
character device provides events in a FIFO to userspace, which is
what we need.
The timestamp provided to userspace is an opaque 64bit
unsigned value. I suppose we assume it is monotonic but
you can actually augment the semantics for your specific
stamp, as long as 64 bits is gonna work.
2. The timestamp for the chardev is currently obtained in
drivers/gpio/gpiolib-cdev.c like this:
static u64 line_event_timestamp(struct line *line)
{
	if (test_bit(FLAG_EVENT_CLOCK_REALTIME, &line->desc->flags))
		return ktime_get_real_ns();

	return ktime_get_ns();
}
What you want to do is to add a new flag for hardware timestamps
and use that if available. FLAG_EVENT_CLOCK_HARDWARE?
FLAG_EVENT_CLOCK_NATIVE?
Then you need to figure out a mechanism so we can obtain
the right timestamp from the hardware event right here,
you can hook into the GPIO driver if need be, we can
figure out the gpio_chip for a certain line for sure.
So you first need to augment the userspace
ABI and the character device code to add this. See
commit 26d060e47e25f2c715a1b2c48fea391f67907a30
"gpiolib: cdev: allow edge event timestamps to be configured as REALTIME"
by Kent Gibson to see what needs to be done.
3. Also patch tools/gpio/gpio-event-mon.c to support this flag and use that
for prototyping and proof of concept.
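A sketch of how line_event_timestamp() might grow the hardware branch described in point 2. This is not real gpiolib code: FLAG_EVENT_CLOCK_HARDWARE is the proposed (not yet existing) flag, and the line_stub / *_stub helpers stand in for the real kernel types and ktime calls:

```c
#include <stdint.h>

#define FLAG_EVENT_CLOCK_REALTIME  (1u << 0)
#define FLAG_EVENT_CLOCK_HARDWARE  (1u << 1)	/* proposed new flag */

struct line_stub {
	unsigned int flags;
	uint64_t hw_ts;	/* timestamp fetched from the GTE FIFO */
};

static uint64_t ktime_get_real_ns_stub(void) { return 111; }	/* stand-in */
static uint64_t ktime_get_ns_stub(void)      { return 222; }	/* stand-in */

/* Hardware wins if requested, then REALTIME, then MONOTONIC. */
static uint64_t line_event_timestamp_stub(struct line_stub *line)
{
	if (line->flags & FLAG_EVENT_CLOCK_HARDWARE)
		return line->hw_ts;
	if (line->flags & FLAG_EVENT_CLOCK_REALTIME)
		return ktime_get_real_ns_stub();
	return ktime_get_ns_stub();
}
```

The open question remains how the GTE driver hands hw_ts to the cdev layer for the right line.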
> ============
> For IRQ:
> ============
Marc Zyngier and/or Thomas Gleixner know this stuff.
It does make sense to add some infrastructure so that GPIO events
and IRQs can use the same timestamping hardware.
And certainly you will also want to use this timestamp for
IIO devices? If it is just GPIOs and IRQs today, it will be
gyroscopes and accelerometers tomorrow, am I right?
Yours,
Linus Walleij
On Sat, Mar 20, 2021 at 12:56 PM Linus Walleij <[email protected]> wrote:
>
> Hi Dipen,
>
> thanks for your mail!
>
> I involved some other kernel people to get some discussion.
> I think Kent Gibson can be of great help because he is using
> GPIOs with high precision.
>
> We actually discussed this a bit when adding support for
> realtime timestamps.
Adding Richard Cochran as well, for drivers/ptp/, he may be able to
identify whether this should be integrated into that framework in some
form.
fullquote below
> On Wed, Mar 17, 2021 at 11:29 PM Dipen Patel <[email protected]> wrote:
>
> > Nvidia Tegra SoCs have generic timestamping engine (GTE) hardware module which
> > can monitor SoC signals like IRQ lines and GPIO lines for state change, upon
> > detecting the change, it can timestamp and store in its internal hardware FIFO.
> > The advantage of the GTE module can be realized in applications like robotics
> > or autonomous vehicle where it can help record events with precise timestamp.
>
> That sounds very useful.
>
> Certainly the kernel shall be able to handle this.
>
> > ============
> > For GPIO:
> > ============
> > 1. GPIO has to be configured as input and IRQ must be enabled.
> > 2. Ask GPIO controller driver to set corresponding timestamp bit in the
> > specified GPIO config register.
> > 3. Translate GPIO specified by the client to its internal bitmap.
> > 3.a For example, If client specifies GPIO line 31, it could be bit 13 of GTE
> > register.
> > 4. Set internal bits to enable monitoring in GTE module
> > 5. Additionally GTE driver can open up lanes for the user space application
> > as a client and can send timestamping events directly to the application.
>
> I have some concerns:
>
> 1. GPIO should for all professional applications be used with the character
> device /dev/gpiochipN, under no circumstances shall the old sysfs
> ABI be used for this. In this case it is necessary because the
> character device provides events in a FIFO to userspace, which is
> what we need.
>
> The timestamp provided to userspace is an opaque 64bit
> unsigned value. I suppose we assume it is monotonic but
> you can actually augment the semantics for your specific
> stamp, as long as 64 bits is gonna work.
>
> 2. The timestamp for the chardev is currently obtained in
> drivers/gpio/gpiolib-cdev.c like this:
>
> static u64 line_event_timestamp(struct line *line)
> {
> if (test_bit(FLAG_EVENT_CLOCK_REALTIME, &line->desc->flags))
> return ktime_get_real_ns();
>
> return ktime_get_ns();
> }
>
> What you want to do is to add a new flag for hardware timestamps
> and use that if available. FLAG_EVENT_CLOCK_HARDWARE?
> FLAG_EVENT_CLOCK_NATIVE?
>
> Then you need to figure out a mechanism so we can obtain
> the right timestamp from the hardware event right here,
> you can hook into the GPIO driver if need be, we can
> figure out the gpio_chip for a certain line for sure.
>
> So you first need to augment the userspace
> ABI and the character device code to add this. See
> commit 26d060e47e25f2c715a1b2c48fea391f67907a30
> "gpiolib: cdev: allow edge event timestamps to be configured as REALTIME"
> by Kent Gibson to see what needs to be done.
>
> 3. Also patch tools/gpio/gpio-event-mon.c to support this flag and use that
> for prototyping and proof of concept.
>
> > ============
> > For IRQ:
> > ============
>
> Marc Zyngier and/or Thomas Gleixner know this stuff.
>
> It does make sense to add some infrastructure so that GPIO events
> and IRQs can use the same timestamping hardware.
>
> And certainly you will also want to use this timestamp for
> IIO devices? If it is just GPIOs and IRQs today, it will be
> gyroscopes and accelerometers tomorrow, am I right?
>
> Yours,
> Linus Walleij
On Sat, Mar 20, 2021 at 01:44:20PM +0100, Arnd Bergmann wrote:
> Adding Richard Cochran as well, for drivers/ptp/, he may be able to
> identify whether this should be integrated into that framework in some
> form.
I'm not familiar with the GTE, but it sounds like it is a (free
running?) clock with time stamping inputs. If so, then it could
expose a PHC. That gets you functionality:
- clock_gettime() and friends
- comparison ioctl between GTE clock and CLOCK_REALTIME
- time stamping channels with programmable input selection
The mentioned applications (robotics and autonomous vehicle, so near
and dear to my heart) surely already use the PHC API for dealing with
network and system time sources, and so exposing the GTE as a PHC
means that user space programs will have a consistent API.
[ The only drawback I can see is the naming of the C language
identifiers in include/uapi/linux/ptp_clock.h. If that bothers
people, then these can be changed to something more generic while
keeping compatibility aliases. ]
Thanks,
Richard
On Sat, Mar 20, 2021 at 12:56:36PM +0100, Linus Walleij wrote:
> Hi Dipen,
>
> thanks for your mail!
>
> I involved some other kernel people to get some discussion.
> I think Kent Gibson can be of great help because he is using
> GPIOs with high precision.
>
Actually I just extended the cdev uAPI to provide the REALTIME option,
which was the event clock until we changed to MONOTONIC in Linux 5.7,
as there were some users that were requiring the REALTIME clock.
> We actually discussed this a bit when adding support for
> realtime timestamps.
>
> On Wed, Mar 17, 2021 at 11:29 PM Dipen Patel <[email protected]> wrote:
>
> > Nvidia Tegra SoCs have generic timestamping engine (GTE) hardware module which
> > can monitor SoC signals like IRQ lines and GPIO lines for state change, upon
> > detecting the change, it can timestamp and store in its internal hardware FIFO.
> > The advantage of the GTE module can be realized in applications like robotics
> > or autonomous vehicle where it can help record events with precise timestamp.
>
> That sounds very useful.
>
Indeed - it could remove the latency and jitter that results from
timestamping events in the IRQ handler.
> Certainly the kernel shall be able to handle this.
>
> > ============
> > For GPIO:
> > ============
> > 1. GPIO has to be configured as input and IRQ must be enabled.
> > 2. Ask GPIO controller driver to set corresponding timestamp bit in the
> > specified GPIO config register.
> > 3. Translate GPIO specified by the client to its internal bitmap.
> > 3.a For example, If client specifies GPIO line 31, it could be bit 13 of GTE
> > register.
> > 4. Set internal bits to enable monitoring in GTE module
> > 5. Additionally GTE driver can open up lanes for the user space application
> > as a client and can send timestamping events directly to the application.
>
> I have some concerns:
>
> 1. GPIO should for all professional applications be used with the character
> device /dev/gpiochipN, under no circumstances shall the old sysfs
> ABI be used for this. In this case it is necessary because the
> character device provides events in a FIFO to userspace, which is
> what we need.
>
The cdev uAPI would certainly be the most sensible place to expose
this to userspace - its line events being a direct analog to what the GTE
provides.
> The timestamp provided to userspace is an opaque 64bit
> unsigned value. I suppose we assume it is monotonic but
> you can actually augment the semantics for your specific
> stamp, as long as 64 bits is gonna work.
>
> 2. The timestamp for the chardev is currently obtained in
> drivers/gpio/gpiolib-cdev.c like this:
>
> static u64 line_event_timestamp(struct line *line)
> {
> if (test_bit(FLAG_EVENT_CLOCK_REALTIME, &line->desc->flags))
> return ktime_get_real_ns();
>
> return ktime_get_ns();
> }
>
> What you want to do is to add a new flag for hardware timestamps
> and use that if available. FLAG_EVENT_CLOCK_HARDWARE?
> FLAG_EVENT_CLOCK_NATIVE?
>
HARDWARE looks better to me, as NATIVE is more vague.
> Then you need to figure out a mechanism so we can obtain
> the right timestamp from the hardware event right here,
> you can hook into the GPIO driver if need be, we can
> figure out the gpio_chip for a certain line for sure.
>
Firstly, line_event_timestamp() is called from the IRQ handler context.
That is obviously more constraining than if it were only called from the
IRQ thread. If the GTE is providing the timestamp then that could be
put off until the IRQ thread.
So you probably want to refactor line_event_timestamp() into two flavours
- one for IRQ handler that returns 0 if HARDWARE is set, and the other for
IRQ thread, where there is already a fallback call to
line_event_timestamp() for the nested threaded interrupt case, that gets
the timestamp from the GTE.
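The two-flavour split above might look roughly like this (names hypothetical; the stubs stand in for the real gpiolib types and the GTE FIFO read):

```c
#include <stdint.h>

#define FLAG_EVENT_CLOCK_HARDWARE (1u << 1)	/* proposed flag */

struct line_stub {
	unsigned int flags;
	uint64_t hw_ts;	/* stand-in for a read from the GTE FIFO */
};

static uint64_t ktime_get_ns_stub(void) { return 222; }	/* stand-in */

/* Called from hard IRQ context: cheap, never touches the GTE. */
static uint64_t line_event_timestamp_irq(struct line_stub *line)
{
	if (line->flags & FLAG_EVENT_CLOCK_HARDWARE)
		return 0;	/* defer to the IRQ thread */
	return ktime_get_ns_stub();
}

/* Called from the IRQ thread when the handler deferred (ts == 0). */
static uint64_t line_event_timestamp_thread(struct line_stub *line,
					    uint64_t ts)
{
	if (ts)
		return ts;	/* already stamped in the handler */
	return line->hw_ts;	/* fetch from the GTE */
}
```

Using 0 as the "not stamped yet" marker mirrors the existing nested-threaded-interrupt fallback path.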
But my primary concern here would be keeping the two event FIFOs (GTE and
cdev) in sync. Masking and unmasking in hardware and the kernel needs to
be coordinated to prevent races that would result in sync loss.
So this probably needs to be configured in the GTE driver via the irq
path, rather than pinctrl?
Is every event detected by the GTE guaranteed to trigger an interrupt in
the kernel?
How to handle GTE FIFO overflows? Can they be detected or prevented?
> So you first need to augment the userspace
> ABI and the character device code to add this. See
> commit 26d060e47e25f2c715a1b2c48fea391f67907a30
> "gpiolib: cdev: allow edge event timestamps to be configured as REALTIME"
> by Kent Gibson to see what needs to be done.
>
You should also extend gpio_v2_line_flags_validate() to disallow setting
of multiple event clock flags, similar to the bias flag checks.
Currently there is only the one event clock flag, so no need to check.
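The validation could be as simple as a "at most one bit set" check over the clock flags; the flag values below are stand-ins (the HARDWARE flag does not exist yet), not the real uapi constants:

```c
#include <stdint.h>

#define GPIO_V2_LINE_FLAG_EVENT_CLOCK_REALTIME (1ULL << 0)	/* stand-in */
#define GPIO_V2_LINE_FLAG_EVENT_CLOCK_HARDWARE (1ULL << 1)	/* hypothetical */

/* Valid when zero or exactly one event clock flag is set, analogous to
 * the existing bias flag checks in gpio_v2_line_flags_validate(). */
static int event_clock_flags_valid(uint64_t flags)
{
	uint64_t clk = flags & (GPIO_V2_LINE_FLAG_EVENT_CLOCK_REALTIME |
				GPIO_V2_LINE_FLAG_EVENT_CLOCK_HARDWARE);

	/* clk & (clk - 1) clears the lowest set bit: zero iff <= 1 bit set */
	return (clk & (clk - 1)) == 0;
}
```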
> 3. Also patch tools/gpio/gpio-event-mon.c to support this flag and use that
> for prototyping and proof of concept.
>
The corresponding commit for the REALTIME option is
commit e0822cf9b892ed051830daaf57896aca48c8567b
"tools: gpio: add option to report wall-clock time to gpio-event-mon"
Cheers,
Kent.
> > ============
> > For IRQ:
> > ============
>
> Marc Zyngier and/or Thomas Gleixner know this stuff.
>
> It does make sense to add some infrastructure so that GPIO events
> and IRQs can use the same timestamping hardware.
>
> And certainly you will also want to use this timestamp for
> IIO devices? If it is just GPIOs and IRQs today, it will be
> gyroscopes and accelerometers tomorrow, am I right?
>
> Yours,
> Linus Walleij
Hi Linus and Kent,
Thank you so much for your valuable inputs and time. Please see below for my follow-ups.
On 3/21/21 11:00 PM, Kent Gibson wrote:
> On Sat, Mar 20, 2021 at 12:56:36PM +0100, Linus Walleij wrote:
>> Hi Dipen,
>>
>> thanks for your mail!
>>
>> I involved some other kernel people to get some discussion.
>> I think Kent Gibson can be of great help because he is using
>> GPIOs with high precision.
>>
>
> Actually I just extended the cdev uAPI to provide the REALTIME option,
> which was the event clock until we changed to MONOTONIC in Linux 5.7,
> as there were some users that were requiring the REALTIME clock.
>
>> We actually discussed this a bit when adding support for
>> realtime timestamps.
>>
>> On Wed, Mar 17, 2021 at 11:29 PM Dipen Patel <[email protected]> wrote:
>>
>>> Nvidia Tegra SoCs have generic timestamping engine (GTE) hardware module which
>>> can monitor SoC signals like IRQ lines and GPIO lines for state change, upon
>>> detecting the change, it can timestamp and store in its internal hardware FIFO.
>>> The advantage of the GTE module can be realized in applications like robotics
>>> or autonomous vehicle where it can help record events with precise timestamp.
>>
>> That sounds very useful.
>>
>
> Indeed - it could remove the latency and jitter that results from
> timestamping events in the IRQ handler.
>
>> Certainly the kernel shall be able to handle this.
>>
>>> ============
>>> For GPIO:
>>> ============
>>> 1. GPIO has to be configured as input and IRQ must be enabled.
>>> 2. Ask GPIO controller driver to set corresponding timestamp bit in the
>>> specified GPIO config register.
>>> 3. Translate GPIO specified by the client to its internal bitmap.
>>> 3.a For example, If client specifies GPIO line 31, it could be bit 13 of GTE
>>> register.
>>> 4. Set internal bits to enable monitoring in GTE module
>>> 5. Additionally GTE driver can open up lanes for the user space application
>>> as a client and can send timestamping events directly to the application.
>>
>> I have some concerns:
>>
>> 1. GPIO should for all professional applications be used with the character
>> device /dev/gpiochipN, under no circumstances shall the old sysfs
>> ABI be used for this. In this case it is necessary because the
>> character device provides events in a FIFO to userspace, which is
>> what we need.
>>
>
> The cdev uAPI would certainly be the most sensible place to expose
> this to userspace - its line events being a direct analog to what the GTE
> provides.
>
>> The timestamp provided to userspace is an opaque 64bit
>> unsigned value. I suppose we assume it is monotonic but
>> you can actually augment the semantics for your specific
>> stamp, as long as 64 bits is gonna work.
>>
>> 2. The timestamp for the chardev is currently obtained in
>> drivers/gpio/gpiolib-cdev.c like this:
>>
>> static u64 line_event_timestamp(struct line *line)
>> {
>> if (test_bit(FLAG_EVENT_CLOCK_REALTIME, &line->desc->flags))
>> return ktime_get_real_ns();
>>
>> return ktime_get_ns();
>> }
>>
>> What you want to do is to add a new flag for hardware timestamps
>> and use that if available. FLAG_EVENT_CLOCK_HARDWARE?
>> FLAG_EVENT_CLOCK_NATIVE?
>>
>
> HARDWARE looks better to me, as NATIVE is more vague.
>
>> Then you need to figure out a mechanism so we can obtain
>> the right timestamp from the hardware event right here,
>> you can hook into the GPIO driver if need be, we can
>> figure out the gpio_chip for a certain line for sure.
>>
>
> Firstly, line_event_timestamp() is called from the IRQ handler context.
> That is obviously more constraining than if it were only called from the
> IRQ thread. If the GTE is providing the timestamp then that could be
> put off until the IRQ thread.
> So you probably want to refactor line_event_timestamp() into two flavours
> - one for IRQ handler that returns 0 if HARDWARE is set, and the other for
> IRQ thread, where there is already a fallback call to
> line_event_timestamp() for the nested threaded interrupt case, that gets
> the timestamp from the GTE.
>
My follow-up concerns on both Linus's and Kent's feedback:
1. Please correct me if I am wrong: lineevent in gpiolib* only serves
userspace clients.
1.a What about kernel drivers wanting to use this feature for monitoring their
GPIO lines? See the gyroscope example somewhere below. In that regard, the
lineevent implementation is not sufficient.
1.b Are you also implying extending the lineevent implementation to kernel
drivers?
2. For both cases 1.a and 1.b above, gpiolib* would then become the holder
of all the GTE-related data structures per userspace or kernel client;
is this acceptable? In other words, the gpiolib-cdev framework will become
a client to the GTE framework on behalf of those drivers. I believe we
can embed GTE-related data in the per-lineevent structures.
3. I believe Kent touched on this, but double confirming: there will be a
use-case or scenario where in-kernel clients will want to block until
the next timestamping event. We need to cover that scenario as well.
4. What about kernel drivers wanting to monitor certain IRQ lines? For example,
a gyroscope driver wants to monitor the i2c IRQ line for transaction complete.
4.a Or are you suggesting all the GPIO-related requests will go through
gpiolib-cdev --> gte framework --> gte driver, and for the rest it will
be "some kernel driver" --> gte framework --> gte driver?
I am assuming there will be a gte framework/infrastructure for all the above cases.
> But my primary concern here would be keeping the two event FIFOs (GTE and
> cdev) in sync. Masking and unmasking in hardware and the kernel needs to
> be coordinated to prevent races that would result in sync loss.
> So this probably needs to be configured in the GTE driver via the irq
> path, rather than pinctrl?
>
> Is every event detected by the GTE guaranteed to trigger an interrupt in
> the kernel?
The GTE interrupt is triggered when its internal FIFO reaches a configurable
threshold, which could be set to 1, for example; in that case it will trigger
an interrupt for every event detected.
Can you elaborate more on the pinctrl part?
>
> How to handle GTE FIFO overflows? Can they be detected or prevented?
>
Currently, there is no hardware support to detect the overflow; it can
certainly be done through software.
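One way a software-side overflow check could work: track free-running produced/consumed counters around a small staging buffer, so the drain path can tell exactly how many events were lost. This is only a sketch (all names made up), not the Tegra driver:

```c
#include <stdint.h>

#define GTE_SW_FIFO_SZ 16	/* must be a power of two; illustrative size */

/* The producer (GTE IRQ handler) counts every event it pushes; the
 * consumer counts every event it pops.  If the producer gets more than
 * GTE_SW_FIFO_SZ ahead, events were necessarily dropped. */
struct gte_sw_fifo {
	uint64_t ts[GTE_SW_FIFO_SZ];
	uint64_t head;		/* total events produced */
	uint64_t tail;		/* total events consumed */
	uint64_t dropped;	/* overflow counter */
};

static void gte_fifo_push(struct gte_sw_fifo *f, uint64_t ts)
{
	if (f->head - f->tail == GTE_SW_FIFO_SZ) {
		f->dropped++;	/* overflow: overwrite the oldest event */
		f->tail++;
	}
	f->ts[f->head++ % GTE_SW_FIFO_SZ] = ts;
}

static int gte_fifo_pop(struct gte_sw_fifo *f, uint64_t *ts)
{
	if (f->head == f->tail)
		return -1;	/* empty */
	*ts = f->ts[f->tail++ % GTE_SW_FIFO_SZ];
	return 0;
}
```

A non-zero dropped count is what the cdev side would need in order to resync its own event FIFO, per Kent's concern above.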
>> So you first need to augment the userspace
>> ABI and the character device code to add this. See
>> commit 26d060e47e25f2c715a1b2c48fea391f67907a30
>> "gpiolib: cdev: allow edge event timestamps to be configured as REALTIME"
>> by Kent Gibson to see what needs to be done.
>>
>
> You should also extend gpio_v2_line_flags_validate() to disallow setting
> of multiple event clock flags, similar to the bias flag checks.
> Currently there is only the one event clock flag, so no need to check.
>
>> 3. Also patch tools/gpio/gpio-event-mon.c to support this flag and use that
>> for prototyping and proof of concept.
>>
>
> The corresponding commit for the REALTIME option is
> commit e0822cf9b892ed051830daaf57896aca48c8567b
> "tools: gpio: add option to report wall-clock time to gpio-event-mon"
>
> Cheers,
> Kent.
>
>>> ============
>>> For IRQ:
>>> ============
>>
>> Marc Zyngier and/or Thomas Gleixner know this stuff.
>>
>> It does make sense to add some infrastructure so that GPIO events
>> and IRQs can use the same timestamping hardware.
>>
>> And certainly you will also want to use this timestamp for
>> IIO devices? If it is just GPIOs and IRQs today, it will be
>> gyroscopes and accelerometers tomorrow, am I right?
>>
Gyroscopes, accelerometers and other IIO devices are built on top of i2c/spi
and/or GPIOs. So they are covered as long as they act as clients to the GTE
framework. For example, if a gyroscope uses a GPIO as an interrupt to indicate
frame ready, the GTE could timestamp that GPIO as well as any IRQs, like the
i2c transaction-complete IRQ. For this to happen, the gyroscope would register
itself with the GTE framework and enable the required signals it is interested in.
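A purely hypothetical shape for that client registration, just to show how an IIO driver might consume hardware timestamps; nothing below exists in the kernel:

```c
#include <stdint.h>

/* Hypothetical GTE framework client API. */
struct gte_event {
	unsigned int line;	/* GPIO or IRQ line being monitored */
	uint64_t ts;		/* hardware timestamp from the GTE FIFO */
};

typedef void (*gte_event_cb)(const struct gte_event *ev, void *data);

struct gte_client {
	unsigned int line;
	gte_event_cb cb;
	void *data;
};

/* A client registers the signal it cares about and a callback that the
 * framework would invoke for every timestamped event. */
static int gte_register_client(struct gte_client *cl, unsigned int line,
			       gte_event_cb cb, void *data)
{
	if (!cb)
		return -1;
	cl->line = line;
	cl->cb = cb;
	cl->data = data;
	return 0;
}

/* What the GTE driver would do when it drains its FIFO. */
static void gte_deliver(struct gte_client *cl, uint64_t ts)
{
	struct gte_event ev = { .line = cl->line, .ts = ts };
	cl->cb(&ev, cl->data);
}

/* Example callback a gyroscope driver might provide: record the
 * frame-ready timestamp. */
static uint64_t last_frame_ts;
static void gyro_frame_cb(const struct gte_event *ev, void *data)
{
	(void)data;
	last_frame_ts = ev->ts;
}
```

The same callback shape would also cover the "block until the next timestamping event" case in point 3, with a completion behind it.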
>> Yours,
>> Linus Walleij
Best Regards,
Dipen Patel
Hi Richard,
Thanks for your input and time. Please see my follow-up below.
On 3/20/21 8:38 AM, Richard Cochran wrote:
> On Sat, Mar 20, 2021 at 01:44:20PM +0100, Arnd Bergmann wrote:
>> Adding Richard Cochran as well, for drivers/ptp/, he may be able to
>> identify whether this should be integrated into that framework in some
>> form.
>
> I'm not familiar with the GTE, but it sounds like it is a (free
> running?) clock with time stamping inputs. If so, then it could
> expose a PHC. That gets you functionality:
>
> - clock_gettime() and friends
> - comparison ioctl between GTE clock and CLOCK_REALTIME
> - time stamping channels with programmable input selection
>
The GTE gets, or rather records, the timestamps from the TSC
(timestamp system counter), so the counter is not attached to the GTE;
anyone can access the TSC. I am therefore not sure we really need to
implement a PHC and/or clock_* and friends for the GTE. I believe the burden
of finding the correlation between the various clock domains should be on the
clients; consider the example below.
A networking client has access to both PTP and GTE; it would be its job
to find the correlations, if that is needed at all, based on whatever
use case that client serves. The GTE would come into the picture here if said
client has some GPIO configured and wants a timestamp on it.
Sorry if I misunderstood anything; please elaborate, as I am also
interested in how the GTE can fit into the PTP framework and which use case
that would help.
> The mentioned applications (robotics and autonomous vehicle, so near
> and dear to my heart) surely already use the PHC API for dealing with
> network and system time sources, and so exposing the GTE as a PHC
> means that user space programs will have a consistent API.
>
> [ The only drawback I can see is the naming of the C language
> identifiers in include/uapi/linux/ptp_clock.h. If that bothers
> people, then these can be changed to something more generic while
> keeping compatibility aliases. ]
>
> Thanks,
> Richard
>
Thanks,
Best Regards,
Dipen Patel
On Mon, Mar 22, 2021 at 01:21:46PM -0700, Dipen Patel wrote:
> Hi Linus and Kent,
>
> Thanks you so much for your valuable inputs and time, Please see below, my follow ups.
>
> On 3/21/21 11:00 PM, Kent Gibson wrote:
> > On Sat, Mar 20, 2021 at 12:56:36PM +0100, Linus Walleij wrote:
> >> Hi Dipen,
> >>
> >> thanks for your mail!
> >>
> >> I involved some other kernel people to get some discussion.
> >> I think Kent Gibson can be of great help because he is using
> >> GPIOs with high precision.
> >>
> >
> > Actually I just extended the cdev uAPI to provide the REALTIME option,
> > which was the event clock until we changed to MONOTONIC in Linux 5.7,
> > as there were some users that were requiring the REALTIME clock.
> >
> >> We actually discussed this a bit when adding support for
> >> realtime timestamps.
> >>
> >> On Wed, Mar 17, 2021 at 11:29 PM Dipen Patel <[email protected]> wrote:
> >>
> >>> Nvidia Tegra SoCs have generic timestamping engine (GTE) hardware module which
> >>> can monitor SoC signals like IRQ lines and GPIO lines for state change, upon
> >>> detecting the change, it can timestamp and store in its internal hardware FIFO.
> >>> The advantage of the GTE module can be realized in applications like robotics
> >>> or autonomous vehicle where it can help record events with precise timestamp.
> >>
> >> That sounds very useful.
> >>
> >
> > Indeed - it could remove the latency and jitter that results from
> > timestamping events in the IRQ handler.
> >
> >> Certainly the kernel shall be able to handle this.
> >>
> >>> ============
> >>> For GPIO:
> >>> ============
> >>> 1. GPIO has to be configured as input and IRQ must be enabled.
> >>> 2. Ask GPIO controller driver to set corresponding timestamp bit in the
> >>> specified GPIO config register.
> >>> 3. Translate GPIO specified by the client to its internal bitmap.
> >>> 3.a For example, If client specifies GPIO line 31, it could be bit 13 of GTE
> >>> register.
> >>> 4. Set internal bits to enable monitoring in GTE module
> >>> 5. Additionally GTE driver can open up lanes for the user space application
> >>> as a client and can send timestamping events directly to the application.
> >>
> >> I have some concerns:
> >>
> >> 1. GPIO should for all professional applications be used with the character
> >> device /dev/gpiochipN, under no circumstances shall the old sysfs
> >> ABI be used for this. In this case it is necessary because the
> >> character device provides events in a FIFO to userspace, which is
> >> what we need.
> >>
> >
> > The cdev uAPI would certainly be the most sensible place to expose
> > this to userspace - its line events being a direct analog to what the GTE
> > provides.
> >
> >> The timestamp provided to userspace is an opaque 64bit
> >> unsigned value. I suppose we assume it is monotonic but
> >> you can actually augment the semantics for your specific
> >> stamp, as long as 64 bits is gonna work.
> >>
> >> 2. The timestamp for the chardev is currently obtained in
> >> drivers/gpio/gpiolib-cdev.c like this:
> >>
> >> static u64 line_event_timestamp(struct line *line)
> >> {
> >> if (test_bit(FLAG_EVENT_CLOCK_REALTIME, &line->desc->flags))
> >> return ktime_get_real_ns();
> >>
> >> return ktime_get_ns();
> >> }
> >>
> >> What you want to do is to add a new flag for hardware timestamps
> >> and use that if available. FLAG_EVENT_CLOCK_HARDWARE?
> >> FLAG_EVENT_CLOCK_NATIVE?
> >>
> >
> > HARDWARE looks better to me, as NATIVE is more vague.
> >
> >> Then you need to figure out a mechanism so we can obtain
> >> the right timestamp from the hardware event right here,
> >> you can hook into the GPIO driver if need be, we can
> >> figure out the gpio_chip for a certain line for sure.
> >>
> >
> > Firstly, line_event_timestamp() is called from the IRQ handler context.
> > That is obviously more constraining than if it were only called from the
> > IRQ thread. If the GTE is providing the timestamp then that could be
> > put off until the IRQ thread.
> > So you probably want to refactor line_event_timestamp() into two flavours
> > - one for IRQ handler that returns 0 if HARDWARE is set, and the other for
> > IRQ thread, where there is already a fallback call to
> > line_event_timestamp() for the nested threaded interrupt case, that gets
> > the timestamp from the GTE.
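The two-flavour split described above might look roughly like this (a sketch only: FLAG_EVENT_CLOCK_HARDWARE and gte_read_event_ns() are hypothetical names, the flags handling is simplified, and the ktime/GTE calls are stubbed with fixed values so the control flow is visible):

```c
/* Proposed flag bits; FLAG_EVENT_CLOCK_HARDWARE does not exist yet. */
#define FLAG_EVENT_CLOCK_REALTIME (1UL << 0)
#define FLAG_EVENT_CLOCK_HARDWARE (1UL << 1)	/* proposed new flag */

struct line { unsigned long flags; };

static unsigned long long ktime_get_real_ns(void) { return 1000; }	/* stub */
static unsigned long long ktime_get_ns(void)      { return 2000; }	/* stub */
/* Hypothetical helper pulling the stamp from the matching GTE FIFO entry. */
static unsigned long long gte_read_event_ns(struct line *l) { (void)l; return 3000; }

/* Hard IRQ context: a hardware stamp is not available yet, so return 0
 * and let the IRQ thread fill it in later. */
static unsigned long long line_event_timestamp_irq(struct line *line)
{
	if (line->flags & FLAG_EVENT_CLOCK_HARDWARE)
		return 0;
	if (line->flags & FLAG_EVENT_CLOCK_REALTIME)
		return ktime_get_real_ns();
	return ktime_get_ns();
}

/* IRQ thread: now safe to fetch the hardware timestamp from the GTE. */
static unsigned long long line_event_timestamp_thread(struct line *line)
{
	if (line->flags & FLAG_EVENT_CLOCK_HARDWARE)
		return gte_read_event_ns(line);
	return ktime_get_ns();
}
```

The 0 sentinel from the handler flavour is what tells the thread flavour that a hardware stamp still needs to be fetched.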
> >
>
> My follow-up concerns on both Linus's and Kent's feedback:
>
> 1. Please correct me if I am wrong: lineevent in the gpiolib* only
> serves userspace clients.
> 1.a What about kernel drivers wanting to use this feature for monitoring their
> GPIO lines? See the gyroscope example somewhere below. In that regard, the
> lineevent implementation is not sufficient.
> 1.b Are you also implying extending the lineevent implementation to kernel
> drivers?
> 2. For both cases 1.a and 1.b above, gpiolib* would then become the holder
> of all the GTE-related data structures per userspace or kernel client;
> is this acceptable? In other words, the gpiolib-cdev framework would become
> a client of the GTE framework on behalf of those drivers. I believe we
> can embed GTE-related data in the per-lineevent structures.
> 3. I believe Kent touched on this, but to double confirm: there will be
> use-cases or scenarios where in-kernel clients will want to block until
> the next timestamping event. We need to cover that scenario as well.
> 4. What about kernel drivers wanting to monitor certain IRQ lines? For example,
> a gyroscope driver wants to monitor the i2c IRQ line for transaction complete.
> 4.a Or are you suggesting all GPIO-related requests will go through
> gpiolib-cdev --> gte framework --> gte driver, and for the rest it
> will be "some kernel driver" --> gte framework --> gte driver.
>
> I am assuming there will be gte framework/infrastructure for all above cases.
>
> > But my primary concern here would be keeping the two event FIFOs (GTE and
> > cdev) in sync. Masking and unmasking in hardware and the kernel needs to
> > be coordinated to prevent races that would result in sync loss.
> > So this probably needs to be configured in the GTE driver via the irq
> > path, rather than pinctrl?
> >
> > Is every event detected by the GTE guaranteed to trigger an interrupt in
> > the kernel?
>
> The GTE interrupt is triggered when its internal FIFO reaches a configurable
> threshold, which could be set to 1, for example; in that case it will trigger
> an interrupt for every event detected.
>
> Can you elaborate more on pinctrl part?
>
> >
> > How to handle GTE FIFO overflows? Can they be detected or prevented?
> >
> Currently there is no hardware support to detect the overflow, but it can
> certainly be done through software.
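A software-side overflow check could be as simple as comparing the FIFO occupancy read at service time against the FIFO capacity (a conservative sketch; the capacity value and all names here are assumptions, not Tegra register definitions):

```c
/* Software overflow detection, assuming the driver can read the FIFO
 * occupancy when servicing the interrupt. If occupancy has reached the
 * (assumed) capacity, entries may already have been dropped between
 * services, so count it as a probable overflow. */
#define GTE_FIFO_CAP 32	/* assumed hardware FIFO depth */

struct gte_stats { unsigned long overflows; };

static int gte_check_overflow(struct gte_stats *st, unsigned int occupancy)
{
	if (occupancy >= GTE_FIFO_CAP) {
		st->overflows++;
		return 1;	/* caller may flag subsequent events as suspect */
	}
	return 0;
}
```

With the threshold set to 1, as discussed above, the ISR would run this check before draining, so a client could at least be told that its event stream may have gaps.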
>
In response to all your comments above...
Firstly, I'm not suggesting that other kernel modules would use the
cdev lineevents, only that they would use the same mechanism that
gpiolib-cdev would use to timestamp the lineevents for userspace.
As to that mechanism, my current thinking is that the approach of
associating GTE event FIFO entries with particular physical IRQ events is
problematic, as keeping the two in sync would be difficult, if not
impossible.
A more robust approach is to ignore the physical IRQs and instead service
the GTE event FIFO, generating IRQs from those events in software -
essentially a pre-timestamped IRQ. The IRQ framework could provide the
timestamping functionality, equivalent to line_event_timestamp(), for
the IRQ handler/thread and in this case provide the timestamp from the GTE
event.
So this would be an IRQ feature of which the gpiolib would just be another
client. But I don't know enough about the IRQ framework to do more
than just float the idea - I'm already way out over my skis here.
Btw, the pinctrl API is what the gpiolib uses to control the GPIO aspects
of physical hardware such as direction, value, bias and the like.
Almost certainly not relevant to this feature, so forget I mentioned it.
Cheers,
Kent.
> >> So you first need to augment the userspace
> >> ABI and the character device code to add this. See
> >> commit 26d060e47e25f2c715a1b2c48fea391f67907a30
> >> "gpiolib: cdev: allow edge event timestamps to be configured as REALTIME"
> >> by Kent Gibson to see what needs to be done.
> >>
> >
> > You should also extend gpio_v2_line_flags_validate() to disallow setting
> > of multiple event clock flags, similar to the bias flag checks.
> > Currently there is only the one event clock flag, so no need to check.
> >
> >> 3. Also patch tools/gpio/gpio-event-mon.c to support this flag and use that
> >> for prototyping and proof of concept.
> >>
> >
> > The corresponding commit for the REALTIME option is
> > commit e0822cf9b892ed051830daaf57896aca48c8567b
> > "tools: gpio: add option to report wall-clock time to gpio-event-mon"
> >
> > Cheers,
> > Kent.
> >
> >>> ============
> >>> For IRQ:
> >>> ============
> >>
> >> Marc Zyngier and/or Thomas Gleixner know this stuff.
> >>
> >> It does make sense to add some infrastructure so that GPIO events
> >> and IRQs can use the same timestamping hardware.
> >>
> >> And certainly you will also want to use this timestamp for
> >> IIO devices? If it is just GPIOs and IRQs today, it will be
> >> gyroscopes and accelerometers tomorrow, am I right?
> >>
>
> Gyroscopes, accelerometers or any IIO devices are built on top of i2c/spi
> and/or GPIOs, so they are covered as long as they act as clients of the GTE
> framework. For example, if a gyroscope uses a GPIO interrupt to indicate frame
> ready, GTE could timestamp that GPIO as well as any IRQs, like the i2c
> transaction-complete IRQ. For this to happen, the gyroscope registers itself
> with the GTE framework and enables the signals it is interested in.
>
> >> Yours,
> >> Linus Walleij
>
> Best Regards,
> Dipen Patel
On 3/22/21 5:32 PM, Kent Gibson wrote:
> On Mon, Mar 22, 2021 at 01:21:46PM -0700, Dipen Patel wrote:
>> Hi Linus and Kent,
>>
>> Thanks you so much for your valuable inputs and time, Please see below, my follow ups.
>>
>> On 3/21/21 11:00 PM, Kent Gibson wrote:
>>> On Sat, Mar 20, 2021 at 12:56:36PM +0100, Linus Walleij wrote:
>>>> Hi Dipen,
>>>>
>>>> thanks for your mail!
>>>>
>>>> I involved some other kernel people to get some discussion.
>>>> I think Kent Gibson can be of great help because he is using
>>>> GPIOs with high precision.
>>>>
>>>
>>> Actually I just extended the cdev uAPI to provide the REALTIME option,
>>> which was the event clock until we changed to MONOTONIC in Linux 5.7,
>>> as there were some users that were requiring the REALTIME clock.
>>>
>>>> We actually discussed this a bit when adding support for
>>>> realtime timestamps.
>>>>
>>>> On Wed, Mar 17, 2021 at 11:29 PM Dipen Patel <[email protected]> wrote:
>>>>
>>>>> Nvidia Tegra SoCs have generic timestamping engine (GTE) hardware module which
>>>>> can monitor SoC signals like IRQ lines and GPIO lines for state change, upon
>>>>> detecting the change, it can timestamp and store in its internal hardware FIFO.
>>>>> The advantage of the GTE module can be realized in applications like robotics
>>>>> or autonomous vehicle where it can help record events with precise timestamp.
>>>>
>>>> That sounds very useful.
>>>>
>>>
>>> Indeed - it could remove the latency and jitter that results from
>>> timestamping events in the IRQ handler.
>>>
>>>> Certainly the kernel shall be able to handle this.
>>>>
>>>>> ============
>>>>> For GPIO:
>>>>> ============
>>>>> 1. GPIO has to be configured as input and IRQ must be enabled.
>>>>> 2. Ask GPIO controller driver to set corresponding timestamp bit in the
>>>>> specified GPIO config register.
>>>>> 3. Translate GPIO specified by the client to its internal bitmap.
>>>>> 3.a For example, If client specifies GPIO line 31, it could be bit 13 of GTE
>>>>> register.
>>>>> 4. Set internal bits to enable monitoring in GTE module
>>>>> 5. Additionally GTE driver can open up lanes for the user space application
>>>>> as a client and can send timestamping events directly to the application.
>>>>
>>>> I have some concerns:
>>>>
>>>> 1. GPIO should for all professional applications be used with the character
>>>> device /dev/gpiochipN, under no circumstances shall the old sysfs
>>>> ABI be used for this. In this case it is necessary because the
>>>> character device provides events in a FIFO to userspace, which is
>>>> what we need.
>>>>
>>>
>>> The cdev uAPI would certainly be the most sensible place to expose
>>> this to userspace - its line events being a direct analog to what the GTE
>>> provides.
>>>
>>>> The timestamp provided to userspace is an opaque 64bit
>>>> unsigned value. I suppose we assume it is monotonic but
>>>> you can actually augment the semantics for your specific
>>>> stamp, as long as 64 bits is gonna work.
>>>>
>>>> 2. The timestamp for the chardev is currently obtained in
>>>> drivers/gpio/gpiolib-cdev.c like this:
>>>>
>>>> static u64 line_event_timestamp(struct line *line)
>>>> {
>>>>         if (test_bit(FLAG_EVENT_CLOCK_REALTIME, &line->desc->flags))
>>>>                 return ktime_get_real_ns();
>>>>
>>>>         return ktime_get_ns();
>>>> }
>>>>
>>>> What you want to do is to add a new flag for hardware timestamps
>>>> and use that if available. FLAG_EVENT_CLOCK_HARDWARE?
>>>> FLAG_EVENT_CLOCK_NATIVE?
>>>>
>>>
>>> HARDWARE looks better to me, as NATIVE is more vague.
>>>
>>>> Then you need to figure out a mechanism so we can obtain
>>>> the right timestamp from the hardware event right here,
>>>> you can hook into the GPIO driver if need be, we can
>>>> figure out the gpio_chip for a certain line for sure.
>>>>
>>>
>>> Firstly, line_event_timestamp() is called from the IRQ handler context.
>>> That is obviously more constraining than if it were only called from the
>>> IRQ thread. If the GTE is providing the timestamp then that could be
>>> put off until the IRQ thread.
>>> So you probably want to refactor line_event_timestamp() into two flavours
>>> - one for IRQ handler that returns 0 if HARDWARE is set, and the other for
>>> IRQ thread, where there is already a fallback call to
>>> line_event_timestamp() for the nested threaded interrupt case, that gets
>>> the timestamp from the GTE.
>>>
>>
>> My follow-up concerns on both Linus's and Kent's feedback:
>>
>> 1. Please correct me if I am wrong: lineevent in the gpiolib* only
>> serves userspace clients.
>> 1.a What about kernel drivers wanting to use this feature for monitoring their
>> GPIO lines? See the gyroscope example somewhere below. In that regard, the
>> lineevent implementation is not sufficient.
>> 1.b Are you also implying extending the lineevent implementation to kernel
>> drivers?
>> 2. For both cases 1.a and 1.b above, gpiolib* would then become the holder
>> of all the GTE-related data structures per userspace or kernel client;
>> is this acceptable? In other words, the gpiolib-cdev framework would become
>> a client of the GTE framework on behalf of those drivers. I believe we
>> can embed GTE-related data in the per-lineevent structures.
>> 3. I believe Kent touched on this, but to double confirm: there will be
>> use-cases or scenarios where in-kernel clients will want to block until
>> the next timestamping event. We need to cover that scenario as well.
>> 4. What about kernel drivers wanting to monitor certain IRQ lines? For example,
>> a gyroscope driver wants to monitor the i2c IRQ line for transaction complete.
>> 4.a Or are you suggesting all GPIO-related requests will go through
>> gpiolib-cdev --> gte framework --> gte driver, and for the rest it
>> will be "some kernel driver" --> gte framework --> gte driver.
>>
>> I am assuming there will be gte framework/infrastructure for all above cases.
>>
>>> But my primary concern here would be keeping the two event FIFOs (GTE and
>>> cdev) in sync. Masking and unmasking in hardware and the kernel needs to
>>> be coordinated to prevent races that would result in sync loss.
>>> So this probably needs to be configured in the GTE driver via the irq
>>> path, rather than pinctrl?
>>>
>>> Is every event detected by the GTE guaranteed to trigger an interrupt in
>>> the kernel?
>>
>> The GTE interrupt is triggered when its internal FIFO reaches a configurable
>> threshold, which could be set to 1, for example; in that case it will trigger
>> an interrupt for every event detected.
>>
>> Can you elaborate more on pinctrl part?
>>
>>>
>>> How to handle GTE FIFO overflows? Can they be detected or prevented?
>>>
>> Currently there is no hardware support to detect the overflow, but it can
>> certainly be done through software.
>>
>
> In response to all your comments above...
>
> Firstly, I'm not suggesting that other kernel modules would use the
> cdev lineevents, only that they would use the same mechanism that
> gpiolib-cdev would use to timestamp the lineevents for userspace.
>
Sure, I just wanted to mention the different scenarios and to know
how we can fit all of those together. Having said that, shouldn't this serve
as an opportunity to extend the lineevent framework to accommodate kernel
drivers as clients?
If we can't, then there is a risk of duplicating the lineevent mechanism in all
of those kernel drivers, or at least in the GTE framework/infrastructure as far
as the GPIO-related GTE part is concerned.
> As to that mechanism, my current thinking is that the approach of
> associating GTE event FIFO entries with particular physical IRQ events is
> problematic, as keeping the two in sync would be difficult, if not
> impossible.
>
> A more robust approach is to ignore the physical IRQs and instead service
> the GTE event FIFO, generating IRQs from those events in software -
> essentially a pre-timestamped IRQ. The IRQ framework could provide the
> timestamping functionality, equivalent to line_event_timestamp(), for
> the IRQ handler/thread and in this case provide the timestamp from the GTE
> event.
>
I have not fully understood the above two paragraphs (except the
line_event_timestamp related part).
I have no idea what it means to "ignore the physical IRQs and service the
GTE event FIFO". Just like GPIO clients, there could be IRQ clients which
want to monitor a certain IRQ line, like an ethernet driver wanting to retrieve
the timestamp for its IRQ line, and so on.
> So this would be an IRQ feature of which the gpiolib would just be another
> client. But I don't know enough about the IRQ framework to do more
> than just float the idea - I'm already way out over my skis here.
>
> Btw, the pinctrl API is what the gpiolib uses to control the GPIO aspects
> of physical hardware such as direction, value, bias and the like.
> Almost certainly not relevant to this feature, so forget I mentioned it.
>
> Cheers,
> Kent.
>
>>>> So you first need to augment the userspace
>>>> ABI and the character device code to add this. See
>>>> commit 26d060e47e25f2c715a1b2c48fea391f67907a30
>>>> "gpiolib: cdev: allow edge event timestamps to be configured as REALTIME"
>>>> by Kent Gibson to see what needs to be done.
>>>>
>>>
>>> You should also extend gpio_v2_line_flags_validate() to disallow setting
>>> of multiple event clock flags, similar to the bias flag checks.
>>> Currently there is only the one event clock flag, so no need to check.
>>>
>>>> 3. Also patch tools/gpio/gpio-event-mon.c to support this flag and use that
>>>> for prototyping and proof of concept.
>>>>
>>>
>>> The corresponding commit for the REALTIME option is
>>> commit e0822cf9b892ed051830daaf57896aca48c8567b
>>> "tools: gpio: add option to report wall-clock time to gpio-event-mon"
>>>
>>> Cheers,
>>> Kent.
>>>
>>>>> ============
>>>>> For IRQ:
>>>>> ============
>>>>
>>>> Marc Zyngier and/or Thomas Gleixner know this stuff.
>>>>
>>>> It does make sense to add some infrastructure so that GPIO events
>>>> and IRQs can use the same timestamping hardware.
>>>>
>>>> And certainly you will also want to use this timestamp for
>>>> IIO devices? If it is just GPIOs and IRQs today, it will be
>>>> gyroscopes and accelerometers tomorrow, am I right?
>>>>
>>
>> Gyroscopes, accelerometers or any IIO devices are built on top of i2c/spi
>> and/or GPIOs, so they are covered as long as they act as clients of the GTE
>> framework. For example, if a gyroscope uses a GPIO interrupt to indicate frame
>> ready, GTE could timestamp that GPIO as well as any IRQs, like the i2c
>> transaction-complete IRQ. For this to happen, the gyroscope registers itself
>> with the GTE framework and enables the signals it is interested in.
>>
>>>> Yours,
>>>> Linus Walleij
>>
>> Best Regards,
>> Dipen Patel
On Mon, Mar 22, 2021 at 06:53:10PM -0700, Dipen Patel wrote:
>
>
> On 3/22/21 5:32 PM, Kent Gibson wrote:
> > On Mon, Mar 22, 2021 at 01:21:46PM -0700, Dipen Patel wrote:
> >> Hi Linus and Kent,
> >>
[snip]
> > In response to all your comments above...
> >
> > Firstly, I'm not suggesting that other kernel modules would use the
> > cdev lineevents, only that they would use the same mechanism that
> > gpiolib-cdev would use to timestamp the lineevents for userspace.
> >
> Sure, I just wanted to mention the different scenarios and to know
> how we can fit all of those together. Having said that, shouldn't this serve
> as an opportunity to extend the lineevent framework to accommodate kernel
> drivers as clients?
>
> If we can't, then there is a risk of duplicating the lineevent mechanism in all
> of those kernel drivers, or at least in the GTE framework/infrastructure as far
> as the GPIO-related GTE part is concerned.
>
In-kernel the lineevents are just IRQs so anything needing a "lineevent"
can request the IRQ directly. Or am I missing something?
> > As to that mechanism, my current thinking is that the approach of
> > associating GTE event FIFO entries with particular physical IRQ events is
> > problematic, as keeping the two in sync would be difficult, if not
> > impossible.
> >
> > A more robust approach is to ignore the physical IRQs and instead service
> > the GTE event FIFO, generating IRQs from those events in software -
> > essentially a pre-timestamped IRQ. The IRQ framework could provide the
> > timestamping functionality, equivalent to line_event_timestamp(), for
> > the IRQ handler/thread and in this case provide the timestamp from the GTE
> > event.
> >
>
> I have not fully understood the above two paragraphs (except the
> line_event_timestamp related part).
>
> I have no idea what it means to "ignore the physical IRQs and service the
> GTE event FIFO". Just like GPIO clients, there could be IRQ clients which
> want to monitor a certain IRQ line, like an ethernet driver wanting to retrieve
> the timestamp for its IRQ line, and so on.
>
I mean that in the IRQ framework, rather than enabling the physical IRQ
line it would leave that masked and would instead enable the FIFO line to
service the FIFO, configure the GTE to generate the events for that
line, and then generate IRQs in response to the FIFO events.
That way the client requesting the IRQ is guaranteed to only receive an
IRQ that corresponds to a GTE FIFO event and the timestamp stored in the
IRQ framework would match.
And that is what I mean by this being an IRQ feature.
We need feedback from the IRQ guys as to whether that makes sense to
them.
Cheers,
Kent.
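The scheme described in this mail, leaving the physical line masked and raising the client's IRQ in software from the GTE FIFO, might be sketched like this. generic_handle_irq() is a real kernel entry point used to resend an IRQ in software (stubbed here so the sketch is self-contained); the line-to-virq map and timestamp store are invented for illustration:

```c
struct gte_event { int line; unsigned long long ts; };

#define NR_LINES 4
static const int line_to_virq[NR_LINES] = { 100, 101, 102, 103 };	/* made up */
static unsigned long long irq_hw_timestamp[256];	/* indexed by virq */
static int handled[256];

/* Stub standing in for the kernel's generic_handle_irq(). */
static void generic_handle_irq(int virq) { handled[virq]++; }

/* GTE ISR body: the physical client IRQs stay masked; each FIFO entry
 * becomes exactly one software IRQ, with its hardware stamp recorded
 * where the handler/thread can fetch it. FIFO and IRQ can never drift
 * out of sync because the IRQ *is* the FIFO entry. */
static void gte_service_fifo(const struct gte_event *ev, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		int virq = line_to_virq[ev[i].line];

		irq_hw_timestamp[virq] = ev[i].ts;	/* read by handler/thread */
		generic_handle_irq(virq);
	}
}
```

The point of the design is visible in the ordering: the stamp is stored before the IRQ is raised, so the handler always finds the timestamp of exactly the event that woke it.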
On 3/22/21 7:59 PM, Kent Gibson wrote:
> On Mon, Mar 22, 2021 at 06:53:10PM -0700, Dipen Patel wrote:
>>
>>
>> On 3/22/21 5:32 PM, Kent Gibson wrote:
>>> On Mon, Mar 22, 2021 at 01:21:46PM -0700, Dipen Patel wrote:
>>>> Hi Linus and Kent,
>>>>
>
> [snip]
>
>>> In response to all your comments above...
>>>
>>> Firstly, I'm not suggesting that other kernel modules would use the
>>> cdev lineevents, only that they would use the same mechanism that
>>> gpiolib-cdev would use to timestamp the lineevents for userspace.
>>>
>> Sure, I just wanted to mention the different scenarios and to know
>> how we can fit all of those together. Having said that, shouldn't this serve
>> as an opportunity to extend the lineevent framework to accommodate kernel
>> drivers as clients?
>>
>> If we can't, then there is a risk of duplicating the lineevent mechanism in all
>> of those kernel drivers, or at least in the GTE framework/infrastructure as far
>> as the GPIO-related GTE part is concerned.
>>
>
> In-kernel the lineevents are just IRQs so anything needing a "lineevent"
> can request the IRQ directly. Or am I missing something?
>
In the GPIO context, I meant we can extend line_event_timestamp to kernel
drivers as well. That way, both userspace and kernel drivers requesting a
hardware timestamp on a particular GPIO would be managed by the same
lineevent_* infrastructure from gpiolib. Something like a lineevent_create
version for kernel drivers, so if they need a hardware timestamp on a
GPIO line, they can request it with some flags. That way, GTE can leverage
the lineevent* code from gpiolib to cover both of its GPIO-related use cases,
i.e. userspace apps and kernel drivers.
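The kernel-facing request being proposed here might reduce to something of this shape (all names are hypothetical, nothing like this exists in mainline; this sketches the API surface, not an implementation):

```c
/* Hypothetical kernel-facing request, loosely mirroring the shape of the
 * userspace GPIO v2 line request. */
#define GTE_TS_RISING  (1U << 0)
#define GTE_TS_FALLING (1U << 1)

struct gte_ts_req {
	unsigned int gpio;	/* line to monitor */
	unsigned int flags;	/* edge selection: GTE_TS_RISING/FALLING */
	/* called by the GTE framework for each timestamped edge */
	void (*notify)(void *data, unsigned long long ts, int rising);
	void *data;
};

/* Minimal sanity check a gte_ts_request() entry point might perform. */
static int gte_ts_req_validate(const struct gte_ts_req *r)
{
	if (!r->notify)
		return -1;	/* kernel clients consume events via callback */
	if (!(r->flags & (GTE_TS_RISING | GTE_TS_FALLING)))
		return -1;	/* must select at least one edge */
	return 0;
}

/* Example callback a driver (e.g. the gyroscope above) might supply. */
static void sample_notify(void *data, unsigned long long ts, int rising)
{
	(void)data; (void)ts; (void)rising;
}
```

Whether such a request belongs in gpiolib, the IRQ core, or a GTE framework is exactly the open question in this thread; the struct is only meant to show what a kernel client would need to hand over.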
>>> As to that mechanism, my current thinking is that the approach of
>>> associating GTE event FIFO entries with particular physical IRQ events is
>>> problematic, as keeping the two in sync would be difficult, if not
>>> impossible.
>>>
>>> A more robust approach is to ignore the physical IRQs and instead service
>>> the GTE event FIFO, generating IRQs from those events in software -
>>> essentially a pre-timestamped IRQ. The IRQ framework could provide the
>>> timestamping functionality, equivalent to line_event_timestamp(), for
>>> the IRQ handler/thread and in this case provide the timestamp from the GTE
>>> event.
>>>
>>
>> I have not fully understood the above two paragraphs (except the
>> line_event_timestamp related part).
>>
>> I have no idea what it means to "ignore the physical IRQs and service the
>> GTE event FIFO". Just like GPIO clients, there could be IRQ clients which
>> want to monitor a certain IRQ line, like an ethernet driver wanting to retrieve
>> the timestamp for its IRQ line, and so on.
>>
>
> I mean that in the IRQ framework, rather than enabling the physical IRQ
> line it would leave that masked and would instead enable the FIFO line to
> service the FIFO, configure the GTE to generate the events for that
> line, and then generate IRQs in response to the FIFO events.
> That way the client requesting the IRQ is guaranteed to only receive an
> IRQ that corresponds to a GTE FIFO event and the timestamp stored in the
> IRQ framework would match.
>
I do not think we need to do such things. For example, below is
the rough sequence of how GTE can notify its clients, be they GPIO or IRQ
lines. I believe this will also help in better understanding the ongoing GPIO
discussions.
1. Configure the GTE FIFO watermark or threshold; let's assume 1, i.e.
generate a GTE interrupt when the FIFO depth is 1.
2. In the GTE ISR or ISR thread, drain the internal FIFO entries.
3. Through the GTE driver's internal mapping, identify which IRQ or
GPIO number this entry belongs to. (This is possible as GTE
has predefined bits for each supported signal; for example, GTE
supports 40 GPIOs and 352 IRQ lines, and there are multiple GTE instances
which can take care of all of them.)
4. The GTE driver pushes the event data (in this case the timestamp and
direction of the event, i.e. rising or falling) to the GTE generic framework.
5. The GTE framework stores the per-event data in its per-client/event sw FIFO.
6. Wake up any sleeping client thread.
7. Points 3 to 6 happen in GTE ISR context.
8. gte_retrieve_event (which can block if there is no event) runs at a later
convenient time and does whatever the client wants with the data. We can
extend it to a non-blocking version where some sort of client callback can
be implemented.
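Steps 4 to 8 above can be sketched as a per-client ring buffer. In the kernel this would likely be a kfifo plus a waitqueue; here the blocking in step 8 is reduced to an error return so the flow stays self-contained, and all names are hypothetical:

```c
struct gte_ts_event { unsigned long long ts; int rising; };

#define CLIENT_FIFO_SZ 8
struct gte_client {
	struct gte_ts_event buf[CLIENT_FIFO_SZ];
	unsigned int head;	/* written by the ISR (steps 4-5) */
	unsigned int tail;	/* read by the client (step 8) */
};

/* Steps 4-6: ISR context pushes the decoded event and wakes sleepers. */
static int gte_push_event(struct gte_client *c, struct gte_ts_event ev)
{
	unsigned int next = (c->head + 1) % CLIENT_FIFO_SZ;

	if (next == c->tail)
		return -1;	/* client FIFO full: software overflow */
	c->buf[c->head] = ev;
	c->head = next;
	/* wake_up_interruptible(&c->wait) would go here in the kernel */
	return 0;
}

/* Step 8: client side; in the kernel this would sleep on the waitqueue
 * instead of failing when the FIFO is empty. */
static int gte_retrieve_event(struct gte_client *c, struct gte_ts_event *out)
{
	if (c->tail == c->head)
		return -1;	/* would block here until the ISR pushes */
	*out = c->buf[c->tail];
	c->tail = (c->tail + 1) % CLIENT_FIFO_SZ;
	return 0;
}
```

Events come out in arrival order per client, which is what keeps the timestamps attributable to the right edges.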
> And that is what I mean by this being an IRQ feature.
> We need feedback from the IRQ guys as to whether that makes sense to
> them.
>
> Cheers,
> Kent.
>
Best Regards,
Dipen Patel
On Mon, Mar 22, 2021 at 09:09:50PM -0700, Dipen Patel wrote:
>
>
> On 3/22/21 7:59 PM, Kent Gibson wrote:
> > On Mon, Mar 22, 2021 at 06:53:10PM -0700, Dipen Patel wrote:
> >>
> >>
> >> On 3/22/21 5:32 PM, Kent Gibson wrote:
> >>> On Mon, Mar 22, 2021 at 01:21:46PM -0700, Dipen Patel wrote:
> >>>> Hi Linus and Kent,
> >>>>
> >
> > [snip]
> >
> >>> In response to all your comments above...
> >>>
> >>> Firstly, I'm not suggesting that other kernel modules would use the
> >>> cdev lineevents, only that they would use the same mechanism that
> >>> gpiolib-cdev would use to timestamp the lineevents for userspace.
> >>>
> >> Sure, I just wanted to mention the different scenarios and to know
> >> how we can fit all of those together. Having said that, shouldn't this serve
> >> as an opportunity to extend the lineevent framework to accommodate kernel
> >> drivers as clients?
> >>
> >> If we can't, then there is a risk of duplicating the lineevent mechanism in all
> >> of those kernel drivers, or at least in the GTE framework/infrastructure as far
> >> as the GPIO-related GTE part is concerned.
> >>
> >
> > In-kernel the lineevents are just IRQs so anything needing a "lineevent"
> > can request the IRQ directly. Or am I missing something?
> >
>
> In the GPIO context, I meant we can extend line_event_timestamp to kernel
> drivers as well. That way, both userspace and kernel drivers requesting a
> hardware timestamp on a particular GPIO would be managed by the same
> lineevent_* infrastructure from gpiolib. Something like a lineevent_create
> version for kernel drivers, so if they need a hardware timestamp on a
> GPIO line, they can request it with some flags. That way, GTE can leverage
> the lineevent* code from gpiolib to cover both of its GPIO-related use cases,
> i.e. userspace apps and kernel drivers.
>
I still don't see what that gives you that is better than an IRQ and a
function to provide the timestamp for that IRQ. What specific features
of a lineevent are you after?
The gpiolib-cdev code is there to provide a palatable API for userspace,
and the bulk of that code is specific to the userspace API.
Reusing that code for clients within the kernel just introduces
pointless overhead when they can get what they need more directly.
There may be a case for some additional gpiolib/irq helper functions, but
I don't see gpiolib-cdev as a good fit for that role.
> >>> As to that mechanism, my current thinking is that the approach of
> >>> associating GTE event FIFO entries with particular physical IRQ events is
> >>> problematic, as keeping the two in sync would be difficult, if not
> >>> impossible.
> >>>
> >>> A more robust approach is to ignore the physical IRQs and instead service
> >>> the GTE event FIFO, generating IRQs from those events in software -
> >>> essentially a pre-timestamped IRQ. The IRQ framework could provide the
> >>> timestamping functionality, equivalent to line_event_timestamp(), for
> >>> the IRQ handler/thread and in this case provide the timestamp from the GTE
> >>> event.
> >>>
> >>
> >> I have not fully understood the above two paragraphs (except the
> >> line_event_timestamp related part).
> >>
> >> I have no idea what it means to "ignore the physical IRQs and service the
> >> GTE event FIFO". Just like GPIO clients, there could be IRQ clients which
> >> want to monitor a certain IRQ line, like an ethernet driver wanting to retrieve
> >> the timestamp for its IRQ line, and so on.
> >>
> >
> > I mean that in the IRQ framework, rather than enabling the physical IRQ
> > line it would leave that masked and would instead enable the FIFO line to
> > service the FIFO, configure the GTE to generate the events for that
> > line, and then generate IRQs in response to the FIFO events.
> > That way the client requesting the IRQ is guaranteed to only receive an
> > IRQ that corresponds to a GTE FIFO event and the timestamp stored in the
> > IRQ framework would match.
> >
>
> I do not think we need to do such things. For example, below is
> the rough sequence of how GTE can notify its clients, be they GPIO or IRQ
> lines. I believe this will also help in better understanding the ongoing GPIO
> discussions.
>
> 1. Configure the GTE FIFO watermark or threshold; let's assume 1, i.e.
> generate a GTE interrupt when the FIFO depth is 1.
> 2. In the GTE ISR or ISR thread, drain the internal FIFO entries.
> 3. Through the GTE driver's internal mapping, identify which IRQ or
> GPIO number this entry belongs to. (This is possible as GTE
> has predefined bits for each supported signal; for example, GTE
> supports 40 GPIOs and 352 IRQ lines, and there are multiple GTE instances
> which can take care of all of them.)
> 4. The GTE driver pushes the event data (in this case the timestamp and
> direction of the event, i.e. rising or falling) to the GTE generic framework.
> 5. The GTE framework stores the per-event data in its per-client/event sw FIFO.
> 6. Wake up any sleeping client thread.
> 7. Points 3 to 6 happen in GTE ISR context.
> 8. gte_retrieve_event (which can block if there is no event) runs at a later
> convenient time and does whatever the client wants with the data. We can
> extend it to a non-blocking version where some sort of client callback can
> be implemented.
>
I don't see where that conflicts with my suggestion, except that I moved
everything other than the hardware-specific configuration under the IRQ
umbrella, as a lot of the functionality, in the abstract, aligns with IRQ.
If you want to reduce duplication, shouldn't GTE be integrated into IRQ
rather than provided as a new service?
As I see it, GTE is just another event source for IRQ, and I'd be
exploring that path. But that is just my view.
Cheers,
Kent.
On Mon, Mar 22, 2021 at 01:33:38PM -0700, Dipen Patel wrote:
> Hi Richard,
>
> Thanks for your input and time. Please see below follow up.
>
> On 3/20/21 8:38 AM, Richard Cochran wrote:
> > On Sat, Mar 20, 2021 at 01:44:20PM +0100, Arnd Bergmann wrote:
> >> Adding Richard Cochran as well, for drivers/ptp/, he may be able to
> >> identify whether this should be integrated into that framework in some
> >> form.
> >
> > I'm not familiar with the GTE, but it sounds like it is a (free
> > running?) clock with time stamping inputs. If so, then it could
> > expose a PHC. That gets you functionality:
> >
> > - clock_gettime() and friends
> > - comparison ioctl between GTE clock and CLOCK_REALTIME
> > - time stamping channels with programmable input selection
> >
> GTE gets, or rather records, the timestamps from the TSC
> (timestamp system counter), so the TSC is not tied to the GTE;
> anyone can access it. So I am not sure we really need to implement a PHC
> and/or clock_* and friends for the GTE. I believe the burden of finding the
> correlation between various clock domains should be on the clients; consider
> the below example.
I agree. My understanding is that the TSC is basically an SoC-wide clock
that can be (and is) used by several hardware blocks. There's an
interface for software to read out the value, but it's part of a block
called TKE (time-keeping engine, if I recall correctly) that implements
various clock sources and watchdog functionality.
As a matter of fact, I recall typing up a driver for that at some point
but I don't recall if I ever sent it out or what became of it. I can't
find it upstream at least.
Anyway, I think given that the GTE doesn't provide that clock itself but
rather just a means of taking a snapshot of that clock and stamping
certain events with that, it makes more sense to provide that clock from
the TKE driver.
Thierry
On Mon, Mar 22, 2021 at 9:17 PM Dipen Patel <[email protected]> wrote:
> My follow-up concerns on both Linus's and Kent's feedback:
>
> 1. Please correct me if I am wrong: lineevent in the gpiolib* only
> serves userspace clients.
> 1.a What about kernel drivers wanting to use this feature for monitoring their
> GPIO lines? See the gyroscope example somewhere below. In that regard, the
> lineevent implementation is not sufficient.
> 1.b Are you also implying extending the lineevent implementation to kernel
> drivers?
I was talking about lineevent because you mentioned things like
motors and robotics, and those things are traditionally not run in
kernelspace because they are not generic hardware that fit in the
kernel subsystems.
Normally industrial automatic control tasks are run in a userspace
thread with some realtime priority.
As Kent says, in-kernel events are exclusively using IRQ as
mechanism, and should be modeled as IRQs. Then the question
is how you join the timestamp with the IRQ. GPIO chips are
just some kind of irqchip in this regard, we reuse the irqchip
infrastructure in the kernel for all GPIO drivers that generate
"events" in response to state transitions on digital lines.
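For completeness, a minimal, untested sketch of that pattern, i.e. a
consumer driver treating its GPIO line as an IRQ via the gpiolib/irqchip
path (the "frame-ready" name is made up for illustration):

```c
/* Untested sketch: how an in-kernel consumer already receives a GPIO
 * "event" today, i.e. as a plain IRQ obtained from the GPIO line. */
static irqreturn_t frame_ready_isr(int irq, void *data)
{
	/* a hardware timestamp would have to be attached somewhere here */
	return IRQ_HANDLED;
}

static int consumer_probe(struct device *dev)
{
	struct gpio_desc *gpiod;
	int irq;

	gpiod = devm_gpiod_get(dev, "frame-ready", GPIOD_IN);
	if (IS_ERR(gpiod))
		return PTR_ERR(gpiod);

	irq = gpiod_to_irq(gpiod);	/* GPIO chip acting as an irqchip */
	if (irq < 0)
		return irq;

	return devm_request_irq(dev, irq, frame_ready_isr,
				IRQF_TRIGGER_RISING, "frame-ready", dev);
}
```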
> >> And certainly you will also want to use this timestamp for
> >> IIO devices? If it is just GPIOs and IRQs today, it will be
> >> gyroscopes and accelerometers tomorrow, am I right?
> >>
>
> Gyroscope, accelerometers or any IIO are built on top of i2c/spi and/or GPIOs.
> So they are covered as long as they serve as client to GTE framework, For
> example, if gyroscope uses GPIO as an interrupt to indicate frame
> ready, GTE could timestamp that GPIO as well any IRQs like i2c transaction
> complete IRQ. For this to happen, the gyroscope then registers itself with
> the GTE framework and enables the required signals it is interested in.
I think there are IIO devices that provide their own
hardware timestamp and as such they might want to use that,
so the mechanism needs to be generic enough that a certain
hardware timestamp can be selected sooner or later.
But let's not overcomplicate things for now.
Yours,
Linus Walleij
On Tue, Mar 23, 2021 at 10:08:00AM +0100, Linus Walleij wrote:
> On Mon, Mar 22, 2021 at 9:17 PM Dipen Patel <[email protected]> wrote:
>
> > My follow-up concerns on both Linus's and Kent's feedback:
> >
> > 1. Please correct me if I am wrong, lineevent in the gpiolib* is only
> > serves the userspace clients.
> > 1.a What about kernel drivers wanting to use this feature for monitoring its
> > GPIO lines, see gyroscope example somewhere below. In that regards,
> > lineevent implementation is not sufficient.
> > 1.b Are you also implying to extend lineevent implementation to kernel
> > drivers?
>
> I was talking about lineevent because you mentioned things like
> motors and robotics, and those things are traditionally not run in
> kernelspace because they are not generic hardware that fit in the
> kernel subsystems.
>
> Normally industrial automatic control tasks are run in a userspace
> thread with some realtime priority.
>
> As Kent says, in-kernel events are exclusively using IRQ as
> mechanism, and should be modeled as IRQs. Then the question
> is how you join the timestamp with the IRQ. GPIO chips are
> just some kind of irqchip in this regard, we reuse the irqchip
> infrastructure in the kernel for all GPIO drivers that generate
> "events" in response to state transitions on digital lines.
One potential problem I see with this is that Kent's proposal, if I
understand correctly, would supplant the original IRQ of a device with
the GTE IRQ for the corresponding event. I'm not sure that's desirable
because that would require modifying the device tree and would no longer
accurately represent the hardware. Timestamping also sounds like
something that drivers would want to opt into, and requiring people to
update the device tree to achieve this just doesn't seem reasonable.
This proposal would also only work if there's a 1:1 correspondence
between hardware IRQ and GTE IRQ. However, as Dipen mentioned, the GTE
events can be configured with a threshold, so a GTE IRQ might only
trigger every, say, 5th hardware IRQ. I'm not sure if those are common
use-cases, though.
Obviously if we don't integrate this with IRQs directly, it becomes a
bit more difficult to relate the captured timestamps to the events
across subsystem boundaries. I'm not sure how this would be solved
properly. If the events are sufficiently rare, and it's certain that
none will be missed, then it should be possible to just pull a timestamp
from the timestamp FIFO for each event.
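Modeling that "one timestamp per event, none missed" assumption as code
makes the failure modes explicit; a minimal user-space model (the layout
and names are hypothetical, the real GTE FIFO is hardware):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model: if every monitored event pushes exactly one
 * timestamp into the GTE FIFO and none are dropped, the consumer can
 * correlate events with timestamps by draining the FIFO in order. */
#define GTE_FIFO_DEPTH 32

struct gte_fifo {
	uint64_t ts[GTE_FIFO_DEPTH];
	unsigned int head, tail;	/* head = next pop, tail = next push */
};

/* hardware side: a new event latches a timestamp */
static int gte_fifo_push(struct gte_fifo *f, uint64_t ts)
{
	unsigned int next = (f->tail + 1) % GTE_FIFO_DEPTH;

	if (next == f->head)
		return -1;	/* overflow: event/timestamp pairing is lost */
	f->ts[f->tail] = ts;
	f->tail = next;
	return 0;
}

/* consumer side: called once per observed event, in order */
static int gte_fifo_pop(struct gte_fifo *f, uint64_t *ts)
{
	if (f->head == f->tail)
		return -1;	/* empty: an event with no timestamp queued */
	*ts = f->ts[f->head];
	f->head = (f->head + 1) % GTE_FIFO_DEPTH;
	return 0;
}
```

The error returns are exactly where the cross-subsystem correlation
breaks down if either side's assumption fails.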
All of that said, I wonder if perhaps hierarchical IRQ domains can
somehow be used for this. We did something similar on Tegra not too long
ago for wake events, which are basically IRQs exposed by a parent IRQ
chip that allows waking up from system sleep. There are some
similarities between that and GTE in that the wake events also map to a
subset of GPIOs and IRQs and provide additional functionalities on top.
I managed to mess up the implementation and Marc stepped in to clean
things up, so Cc'ing him since he's clearly more familiar with the topic
than I am.
Thierry
On Tue, Mar 23, 2021 at 10:03:18AM +0100, Thierry Reding wrote:
> I agree. My understanding is that the TSC is basically an SoC-wide clock
> that can be (and is) used by several hardware blocks. There's an
> interface for software to read out the value, but it's part of a block
> called TKE (time-keeping engine, if I recall correctly) that implements
> various clock sources and watchdog functionality.
...
> Anyway, I think given that the GTE doesn't provide that clock itself but
> rather just a means of taking a snapshot of that clock and stamping
> certain events with that, it makes more sense to provide that clock from
> the TKE driver.
It sounds like TKE + GTE together act like a PHC, and GTE doesn't
need/want its own SW interface.
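If that's the direction, a rough, untested sketch of what the TKE side
could look like; the callback body and names are assumed, only the
ptp_clock_info plumbing is the real API:

```c
/* Rough sketch: the TKE driver, not the GTE, would register the PHC. */
static int tke_ptp_gettime(struct ptp_clock_info *ptp, struct timespec64 *ts)
{
	/* read the TSC/TKE counter here and convert it to timespec64 */
	return 0;
}

static struct ptp_clock_info tke_ptp_info = {
	.owner		= THIS_MODULE,
	.name		= "tegra-tke",
	.max_adj	= 0,	/* free-running counter: no adjustment */
	.gettime64	= tke_ptp_gettime,
};

/* in the TKE probe path:
 *	tke->ptp = ptp_clock_register(&tke_ptp_info, dev);
 */
```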
Thanks,
Richard
On 3/23/21 2:08 AM, Linus Walleij wrote:
> On Mon, Mar 22, 2021 at 9:17 PM Dipen Patel <[email protected]> wrote:
>
>> My follow-up concerns on both Linus's and Kent's feedback:
>>
>> 1. Please correct me if I am wrong, lineevent in the gpiolib* is only
>> serves the userspace clients.
>> 1.a What about kernel drivers wanting to use this feature for monitoring its
>> GPIO lines, see gyroscope example somewhere below. In that regards,
>> lineevent implementation is not sufficient.
>> 1.b Are you also implying to extend lineevent implementation to kernel
>> drivers?
>
> I was talking about lineevent because you mentioned things like
> motors and robotics, and those things are traditionally not run in
> kernelspace because they are not generic hardware that fit in the
> kernel subsystems.
>
> Normally industrial automatic control tasks are run in a userspace
> thread with some realtime priority.
>
I mentioned those two use cases for illustration purposes; GTE is not
restricted to robotics and vehicles. I agree that those applications run
mostly from userspace with RT priority, but there are most certainly kernel
drivers they interact with that may want to use GTE. For example, the BMI088
device, mostly used in drones and robotics, could be extended to use GTE for
hardware timestamping of the GPIO it uses to indicate data ready.
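To make that concrete, a purely hypothetical sketch of what such a client
registration could look like; none of these gte_* names exist yet, they
only illustrate the proposed flow:

```c
/* Purely hypothetical sketch -- the gte_* API below does not exist.
 * A BMI088-style driver opting in to hardware timestamping of its
 * data-ready GPIO might do something like this in its probe path. */
static int bmi088_enable_gte(struct device *dev, struct gpio_desc *drdy)
{
	struct gte_client *client;	/* hypothetical handle */

	client = gte_register_client(dev);		/* hypothetical */
	if (IS_ERR(client))
		return PTR_ERR(client);

	/* GTE would translate the GPIO line to its internal bitmap bit
	 * and ask the GPIO controller to set the timestamp config bit */
	return gte_enable_gpio_ts(client, drdy);	/* hypothetical */
}
```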
> As Kent says, in-kernel events are exclusively using IRQ as
> mechanism, and should be modeled as IRQs. Then the question
> is how you join the timestamp with the IRQ. GPIO chips are
> just some kind of irqchip in this regard, we reuse the irqchip
> infrastructure in the kernel for all GPIO drivers that generate
> "events" in response to state transitions on digital lines.
>
>>>> And certainly you will also want to use this timestamp for
>>>> IIO devices? If it is just GPIOs and IRQs today, it will be
>>>> gyroscopes and accelerometers tomorrow, am I right?
>>>>
>>
>> Gyroscope, accelerometers or any IIO are built on top of i2c/spi and/or GPIOs.
>> So they are covered as long as they serve as client to GTE framework, For
>> example, if gyroscope uses GPIO as an interrupt to indicate frame
>> ready, GTE could timestamp that GPIO as well any IRQs like i2c transaction
>> complete IRQ. For this to happen, the gyroscope then registers itself with
>> the GTE framework and enables the required signals it is interested in.
>
> I think there are IIO devices that provide their own
> hardware timestamp and as such they might want to use that,
> so the mechanism need to be generic enough that a certain
> hardware timestamp can be selected sooner or later.
> But let's not overcomplicate things for now.
>
I agree; the BMI088 mentioned above has its own timestamping engine. I will
have to look into bringing its TS engine into the GTE framework as one of the
possible off-chip providers besides the on-chip GTEs. We can defer that part
for later. Thanks for pointing that out.
> Yours,
> Linus Walleij
>
On 3/23/21 3:06 AM, Thierry Reding wrote:
> On Tue, Mar 23, 2021 at 10:08:00AM +0100, Linus Walleij wrote:
>> On Mon, Mar 22, 2021 at 9:17 PM Dipen Patel <[email protected]> wrote:
>>
>>> My follow-up concerns on both Linus's and Kent's feedback:
>>>
>>> 1. Please correct me if I am wrong, lineevent in the gpiolib* is only
>>> serves the userspace clients.
>>> 1.a What about kernel drivers wanting to use this feature for monitoring its
>>> GPIO lines, see gyroscope example somewhere below. In that regards,
>>> lineevent implementation is not sufficient.
>>> 1.b Are you also implying to extend lineevent implementation to kernel
>>> drivers?
>>
>> I was talking about lineevent because you mentioned things like
>> motors and robotics, and those things are traditionally not run in
>> kernelspace because they are not generic hardware that fit in the
>> kernel subsystems.
>>
>> Normally industrial automatic control tasks are run in a userspace
>> thread with some realtime priority.
>>
>> As Kent says, in-kernel events are exclusively using IRQ as
>> mechanism, and should be modeled as IRQs. Then the question
>> is how you join the timestamp with the IRQ. GPIO chips are
>> just some kind of irqchip in this regard, we reuse the irqchip
>> infrastructure in the kernel for all GPIO drivers that generate
>> "events" in response to state transitions on digital lines.
>
> One potential problem I see with this is that Kent's proposal, if I
> understand correctly, would supplant the original IRQ of a device with
> the GTE IRQ for the corresponding event. I'm not sure that's desirable
> because that would require modifying the device tree and would no longer
> accurately represent the hardware. Timestamping also sounds like
> something that drivers would want to opt into, and requiring people to
> update the device tree to achieve this just doesn't seem reasonable.
>
> This proposal would also only work if there's a 1:1 correspondence
> between hardware IRQ and GTE IRQ. However, as Dipen mentioned, the GTE
> events can be configured with a threshold, so a GTE IRQ might only
> trigger every, say, 5th hardware IRQ. I'm not sure if those are common
> use-cases, though.
>
> Obviously if we don't integrate this with IRQs directly, it becomes a
> bit more difficult to relate the captured timestamps to the events
> across subsystem boundaries. I'm not sure how this would be solved
> properly. If the events are sufficiently rare, and it's certain that
> none will be missed, then it should be possible to just pull a timestamp
> from the timestamp FIFO for each event.
>
Just to clarify, I get the impression that GTE is being viewed as an "event"
generating device, which it is not. You can consider GTE a "man in the
middle" type of device which monitors configured signals and, on seeing a
state change, just records a timestamp and stores it in its FIFO.
I agree with Thierry's point.
> All of that said, I wonder if perhaps hierarchical IRQ domains can
> somehow be used for this. We did something similar on Tegra not too long
> ago for wake events, which are basically IRQs exposed by a parent IRQ
> chip that allows waking up from system sleep. There are some
> similarities between that and GTE in that the wake events also map to a
> subset of GPIOs and IRQs and provide additional functionalities on top.
>
> I managed to mess up the implementation and Marc stepped in to clean
> things up, so Cc'ing him since he's clearly more familiar with the topic
> than I am.
>
> Thierry
>
On 3/23/21 3:06 AM, Thierry Reding wrote:
> On Tue, Mar 23, 2021 at 10:08:00AM +0100, Linus Walleij wrote:
>> On Mon, Mar 22, 2021 at 9:17 PM Dipen Patel <[email protected]> wrote:
>>
>>> My follow-up concerns on both Linus's and Kent's feedback:
>>>
>>> 1. Please correct me if I am wrong, lineevent in the gpiolib* is only
>>> serves the userspace clients.
>>> 1.a What about kernel drivers wanting to use this feature for monitoring its
>>> GPIO lines, see gyroscope example somewhere below. In that regards,
>>> lineevent implementation is not sufficient.
>>> 1.b Are you also implying to extend lineevent implementation to kernel
>>> drivers?
>>
>> I was talking about lineevent because you mentioned things like
>> motors and robotics, and those things are traditionally not run in
>> kernelspace because they are not generic hardware that fit in the
>> kernel subsystems.
>>
>> Normally industrial automatic control tasks are run in a userspace
>> thread with some realtime priority.
>>
>> As Kent says, in-kernel events are exclusively using IRQ as
>> mechanism, and should be modeled as IRQs. Then the question
>> is how you join the timestamp with the IRQ. GPIO chips are
>> just some kind of irqchip in this regard, we reuse the irqchip
>> infrastructure in the kernel for all GPIO drivers that generate
>> "events" in response to state transitions on digital lines.
>
> One potential problem I see with this is that Kent's proposal, if I
> understand correctly, would supplant the original IRQ of a device with
> the GTE IRQ for the corresponding event. I'm not sure that's desirable
> because that would require modifying the device tree and would no longer
> accurately represent the hardware. Timestamping also sounds like
> something that drivers would want to opt into, and requiring people to
> update the device tree to achieve this just doesn't seem reasonable.
>
> This proposal would also only work if there's a 1:1 correspondence
> between hardware IRQ and GTE IRQ. However, as Dipen mentioned, the GTE
> events can be configured with a threshold, so a GTE IRQ might only
> trigger every, say, 5th hardware IRQ. I'm not sure if those are common
> use-cases, though.
>
> Obviously if we don't integrate this with IRQs directly, it becomes a
> bit more difficult to relate the captured timestamps to the events
> across subsystem boundaries. I'm not sure how this would be solved
> properly. If the events are sufficiently rare, and it's certain that
> none will be missed, then it should be possible to just pull a timestamp
> from the timestamp FIFO for each event.
>
> All of that said, I wonder if perhaps hierarchical IRQ domains can
> somehow be used for this. We did something similar on Tegra not too long
> ago for wake events, which are basically IRQs exposed by a parent IRQ
> chip that allows waking up from system sleep. There are some
> similarities between that and GTE in that the wake events also map to a
> subset of GPIOs and IRQs and provide additional functionalities on top.
>
Possibly the similarities end there: since those wakes actually are events,
it makes sense for them to be implemented in hierarchical IRQ domains. GTE
does not generate events or cause any event to be generated, besides its own
IRQ.
> I managed to mess up the implementation and Marc stepped in to clean
> things up, so Cc'ing him since he's clearly more familiar with the topic
> than I am.
>
> Thierry
>
On Tue, 23 Mar 2021 10:06:39 +0000,
Thierry Reding <[email protected]> wrote:
[...]
> Obviously if we don't integrate this with IRQs directly, it becomes a
> bit more difficult to relate the captured timestamps to the events
> across subsystem boundaries. I'm not sure how this would be solved
> properly. If the events are sufficiently rare, and it's certain that
> none will be missed, then it should be possible to just pull a timestamp
> from the timestamp FIFO for each event.
>
> All of that said, I wonder if perhaps hierarchical IRQ domains can
> somehow be used for this. We did something similar on Tegra not too long
> ago for wake events, which are basically IRQs exposed by a parent IRQ
> chip that allows waking up from system sleep. There are some
> similarities between that and GTE in that the wake events also map to a
> subset of GPIOs and IRQs and provide additional functionalities on top.
>
> I managed to mess up the implementation and Marc stepped in to clean
> things up, so Cc'ing him since he's clearly more familiar with the topic
> than I am.
Sure, but I'm pretty clueless when it comes to what this GTE thing
does (it has a fast car ring to it, which isn't a selling point for
me... ;-).
If, as I understand it, it is supposed to collect timestamps on
signalling of IRQs, you could make it part of the kernel's view of the
interrupt path by "pushing" a domain on top of the IRQ stack,
triggering the configuration/timestamping of this interrupt.
What is completely unclear to me is how you extract information from
it. The IRQ doesn't really give you an interface to extract a lot of
information aside from an interrupt count and what is defined as the
interrupt state. A timestamp doesn't really count as state, so you'd
need to invent something new here.
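For what it's worth, the "pushing" part has an existing helper,
irq_domain_push_irq(); a rough sketch of how it might be used, with
hypothetical gte_* names:

```c
/* Rough sketch of the "push a domain on top" idea using the existing
 * irq_domain_push_irq() helper; the gte_* names are hypothetical. */
static int gte_timestamp_irq(struct gte_dev *gte, unsigned int virq)
{
	/* inserts gte->domain between the virq and its current parent,
	 * so the GTE irqchip sees (and can configure timestamping for)
	 * this interrupt without touching the consumer's device tree */
	return irq_domain_push_irq(gte->domain, virq, gte);
}
```

Extracting the timestamps back out is the part this doesn't solve, as
noted above.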
Thanks,
M.
--
Without deviation from the norm, progress is not possible.