The atlas sensor driver currently registers a threaded IRQ handler whose
sole responsibility is to trigger an irq_work which will in turn run
iio_trigger_poll() in IRQ context.
This seems overkill given the fact that there already was a opportunity
to run iio_trigger_poll() in IRQ context in the top half of the IRQ
handler. So make use of it, ultimately avoiding a context switch and an
IPI, and reducing latency.
Signed-off-by: Nicolas Saenz Julienne <[email protected]>
---
NOTE: This was only compile tested. I don't know much about iio_triggers
(or iio for that matter), but while reviewing irq_work usage this showed
up and seemed trivial enough to fix right away. There might be a subtle
reason why this is set up as such, but it would at least warrant a
comment.
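For context, the irq_work pattern this patch removes looks roughly like
the minimal, hypothetical sketch below (my_data, my_work_cb and my_setup
are illustrative names, not driver code): the callback runs in hard IRQ
context on the local CPU after a self-IPI, so queueing it from an IRQ
handler only adds an extra hop before work that could have run in the
handler itself.

    #include <linux/irq_work.h>
    #include <linux/printk.h>

    struct my_data {                        /* hypothetical driver state */
            struct irq_work work;
    };

    /* Runs in hard IRQ context once the self-IPI fires. */
    static void my_work_cb(struct irq_work *work)
    {
            struct my_data *d = container_of(work, struct my_data, work);

            pr_info("deferred work for %p\n", d);   /* e.g. iio_trigger_poll() */
    }

    static void my_setup(struct my_data *d)
    {
            init_irq_work(&d->work, my_work_cb);    /* once, at probe time */
    }

    /* Later, from the interrupt handler: irq_work_queue(&d->work); */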
drivers/iio/chemical/atlas-sensor.c | 15 ++-------------
1 file changed, 2 insertions(+), 13 deletions(-)
diff --git a/drivers/iio/chemical/atlas-sensor.c b/drivers/iio/chemical/atlas-sensor.c
index 9cb99585b6ff..710daa169d57 100644
--- a/drivers/iio/chemical/atlas-sensor.c
+++ b/drivers/iio/chemical/atlas-sensor.c
@@ -13,7 +13,6 @@
#include <linux/mutex.h>
#include <linux/err.h>
#include <linux/irq.h>
-#include <linux/irq_work.h>
#include <linux/i2c.h>
#include <linux/mod_devicetable.h>
#include <linux/regmap.h>
@@ -89,7 +88,6 @@ struct atlas_data {
struct iio_trigger *trig;
struct atlas_device *chip;
struct regmap *regmap;
- struct irq_work work;
unsigned int interrupt_enabled;
/* 96-bit data + 32-bit pad + 64-bit timestamp */
__be32 buffer[6] __aligned(8);
@@ -442,13 +440,6 @@ static const struct iio_buffer_setup_ops atlas_buffer_setup_ops = {
.predisable = atlas_buffer_predisable,
};
-static void atlas_work_handler(struct irq_work *work)
-{
- struct atlas_data *data = container_of(work, struct atlas_data, work);
-
- iio_trigger_poll(data->trig);
-}
-
static irqreturn_t atlas_trigger_handler(int irq, void *private)
{
struct iio_poll_func *pf = private;
@@ -474,7 +465,7 @@ static irqreturn_t atlas_interrupt_handler(int irq, void *private)
struct iio_dev *indio_dev = private;
struct atlas_data *data = iio_priv(indio_dev);
- irq_work_queue(&data->work);
+ iio_trigger_poll(data->trig);
return IRQ_HANDLED;
}
@@ -677,12 +668,10 @@ static int atlas_probe(struct i2c_client *client,
goto unregister_trigger;
}
- init_irq_work(&data->work, atlas_work_handler);
-
if (client->irq > 0) {
/* interrupt pin toggles on new conversion */
ret = devm_request_threaded_irq(&client->dev, client->irq,
- NULL, atlas_interrupt_handler,
+ atlas_interrupt_handler, NULL,
IRQF_TRIGGER_RISING |
IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
"atlas_irq",
--
2.31.1
On Thu, Jun 24, 2021 at 1:01 PM Nicolas Saenz Julienne
<[email protected]> wrote:
>
> The atlas sensor driver currently registers a threaded IRQ handler whose
> sole responsibility is to trigger an irq_work which will in turn run
> iio_trigger_poll() in IRQ context.
>
> This seems overkill given the fact that there already was a opportunity
an opportunity
> to run iio_trigger_poll() in IRQ context in the top half of the IRQ
> handler. So make use of it, ultimately avoiding a context switch and an
> IPI, and reducing latency.
...
> @@ -474,7 +465,7 @@ static irqreturn_t atlas_interrupt_handler(int irq, void *private)
> struct iio_dev *indio_dev = private;
> struct atlas_data *data = iio_priv(indio_dev);
>
> - irq_work_queue(&data->work);
> + iio_trigger_poll(data->trig);
Have you considered dropping atlas_interrupt_trigger_ops() altogether?
> return IRQ_HANDLED;
...
> if (client->irq > 0) {
> /* interrupt pin toggles on new conversion */
> ret = devm_request_threaded_irq(&client->dev, client->irq,
> - NULL, atlas_interrupt_handler,
> + atlas_interrupt_handler, NULL,
So, you move it from a threaded IRQ to a hard IRQ handler (we have a
separate call for this).
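For reference, the dedicated hard-IRQ request would look roughly like the
untested sketch below (dev_id stays indio_dev, matching what
atlas_interrupt_handler() dereferences; whether IRQF_ONESHOT should be
kept once there is no threaded handler is a separate question):

        ret = devm_request_irq(&client->dev, client->irq,
                               atlas_interrupt_handler,
                               IRQF_TRIGGER_RISING |
                               IRQF_TRIGGER_FALLING,
                               "atlas_irq", indio_dev);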
Can you guarantee that handling of those events will be fast enough?
> IRQF_TRIGGER_RISING |
> IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
> "atlas_irq",
--
With Best Regards,
Andy Shevchenko
Hi Andy, thanks for the review.
On Thu, 2021-06-24 at 13:39 +0300, Andy Shevchenko wrote:
> On Thu, Jun 24, 2021 at 1:01 PM Nicolas Saenz Julienne
> <[email protected]> wrote:
> >
> > The atlas sensor driver currently registers a threaded IRQ handler whose
> > sole responsibility is to trigger an irq_work which will in turn run
> > iio_trigger_poll() in IRQ context.
> >
> > This seems overkill given the fact that there already was a opportunity
>
> an opportunity
Thanks, noted.
> > @@ -474,7 +465,7 @@ static irqreturn_t atlas_interrupt_handler(int irq, void *private)
> > struct iio_dev *indio_dev = private;
> > struct atlas_data *data = iio_priv(indio_dev);
> >
> > - irq_work_queue(&data->work);
> > + iio_trigger_poll(data->trig);
>
> Have you considered dropping atlas_interrupt_trigger_ops() altogether?
Not really, but it makes sense as a separate patch. I'll take care of it.
>
> > if (client->irq > 0) {
> > /* interrupt pin toggles on new conversion */
> > ret = devm_request_threaded_irq(&client->dev, client->irq,
>
> > - NULL, atlas_interrupt_handler,
> > + atlas_interrupt_handler, NULL,
>
> So, you move it from a threaded IRQ to a hard IRQ handler (we have a
> separate call for this).
Noted.
> Can you guarantee that handling of those events will be fast enough?
Do you mean the events triggered in iio_trigger_poll()? If so, the amount of
time spent in IRQ context is going to be the same regardless of whether it's
handled through atlas' IRQ or later in irq_work IPI (or softirq context on some
weird platforms).
--
Nicolás Sáenz
On Thu, 24 Jun 2021 13:13:47 +0200
Nicolas Saenz Julienne <[email protected]> wrote:
> Hi Andy, thanks for the review.
>
> On Thu, 2021-06-24 at 13:39 +0300, Andy Shevchenko wrote:
> > On Thu, Jun 24, 2021 at 1:01 PM Nicolas Saenz Julienne
> > <[email protected]> wrote:
> > >
> > > The atlas sensor driver currently registers a threaded IRQ handler whose
> > > sole responsibility is to trigger an irq_work which will in turn run
> > > iio_trigger_poll() in IRQ context.
> > >
> > > This seems overkill given the fact that there already was a opportunity
> >
> > an opportunity
>
> Thanks, noted.
>
> > > @@ -474,7 +465,7 @@ static irqreturn_t atlas_interrupt_handler(int irq, void *private)
> > > struct iio_dev *indio_dev = private;
> > > struct atlas_data *data = iio_priv(indio_dev);
> > >
> > > - irq_work_queue(&data->work);
> > > + iio_trigger_poll(data->trig);
> >
> > Have you considered dropping atlas_interrupt_trigger_ops() altogether?
>
> Not really, but it makes sense as a separate patch. I'll take care of it.
>
> >
> > > if (client->irq > 0) {
> > > /* interrupt pin toggles on new conversion */
> > > ret = devm_request_threaded_irq(&client->dev, client->irq,
> >
> > > - NULL, atlas_interrupt_handler,
> > > + atlas_interrupt_handler, NULL,
> >
> > So, you move it from a threaded IRQ to a hard IRQ handler (we have a
> > separate call for this).
>
> Noted.
>
> > Can you guarantee that handling of those events will be fast enough?
>
> Do you mean the events triggered in iio_trigger_poll()? If so, the amount of
> time spent in IRQ context is going to be the same regardless of whether it's
> handled through atlas' IRQ or later in irq_work IPI (or softirq context on some
> weird platforms).
>
Hi Nicolas, Andy, Matt,
Just been checking patchwork for IIO and noted that this one is still outstanding.
My reading of the above is that we kind of got to a conclusion - though I'd like
Matt to sanity check the patch (and maybe test it if he still has hardware for
this?)
We have a generic form of this handler, iio_trigger_generic_data_rdy_poll(),
that may let you drop the atlas_interrupt_handler() function entirely:
https://elixir.bootlin.com/linux/latest/source/drivers/iio/industrialio-trigger.c#L182
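An untested sketch of that combination, assuming the trigger is still
registered as it is today; note that the generic handler expects the
trigger itself as dev_id:

        if (client->irq > 0) {
                /* interrupt pin toggles on new conversion */
                ret = devm_request_irq(&client->dev, client->irq,
                                       iio_trigger_generic_data_rdy_poll,
                                       IRQF_TRIGGER_RISING |
                                       IRQF_TRIGGER_FALLING,
                                       "atlas_irq", data->trig);
        }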
Thanks,
Jonathan