v1 patchset subject was "Fix serial console for PREEMPT_RT".
While dealing with a bug that breaks the serial console on qemu (8250)
after printing a lot of data, I found some issues in this driver on
PREEMPT_RT kernels, where force_irqthreads is enabled, which end up
causing the IRQ to be disabled and the serial console to go down.
I found out the 8250 driver gets an IRQ request for every TX byte, but the
handler is able to send multiple bytes per "thread wake up".
If the thread doesn't get to run that often, and ends up handling many IRQs
at once, only a few changes in threads_handled get noticed, meaning only a
few runs of note_interrupt() will skip incrementing irqs_unhandled, which
eventually leads to the IRQ being disabled.
For serial8250, quickly printing ~300 kBytes is enough to trigger the IRQ
disabling.
In order to fix this, I propose that we subtract from irqs_unhandled the
number of IRQs handled since the last note_interrupt():
(irqs_unhandled -= threads_handled - threads_handled_last)
This way irqs_unhandled actually reflects how many of those IRQs went
unhandled.
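In (uncompiled) pseudo-C, the idea is the sketch below; the actual
implementation is in patch #2:

/* Sketch: threaded-IRQ accounting in note_interrupt() */
diff = threads_handled - threads_handled_last;
threads_handled_last = threads_handled;
if (diff) {
	/* Every IRQ handled since the last check was not "unhandled" */
	irqs_unhandled -= min(diff, irqs_unhandled);
	action_ret = IRQ_HANDLED;
}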
This work is divided into:
Patch #1: Simple change of the SPURIOUS_DEFERRED bit (31 -> 0), so we can
accurately measure how many IRQs have been handled even when
threads_handled & threads_handled_last are close to overflowing.
Patch #2: Subtract the diff from irqs_unhandled in note_interrupt().
Patch #3: Implement IRQ_HANDLED_MANY in order to let the handler return how
many interrupts have been handled in a single run, and increment
threads_handled accordingly (see the usage sketch after this list).
Patch #4: Change the serial8250 driver to use the IRQ_HANDLED_MANY
interface, so it can report how many IRQs have been handled and keep
irqs_unhandled correct, thus avoiding the IRQ being disabled during normal
usage.
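As an example of the interface, a threaded handler that drains its queue
could report its work roughly like this (an uncompiled sketch; the
my_dev_*() helpers and struct are hypothetical):

static irqreturn_t my_thread_fn(int irq, void *dev_id)
{
	struct my_dev *dev = dev_id;	/* hypothetical device */
	int done = 0;

	/* Handle every request currently queued, counting each one */
	while (my_dev_has_work(dev)) {
		my_dev_handle_one(dev);
		done++;
	}

	/* 0 -> IRQ_NONE, 1 -> IRQ_HANDLED, n > 1 -> IRQ_HANDLED_MANY + count */
	return IRQ_RETVAL_MANY(done);
}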
Changes since RFCv1:
- Implemented a better way of counting handled threaded IRQs, instead of
only zeroing irqs_unhandled.
- Rebased on top of linux-rt/v6.8-rc4-rt4-rebase, so I don't need to fix
the sleep-while-atomic issue in serial8250 mainline.
- Better description of what we are actually fixing
- Changed patchset title from "Fix serial console for PREEMPT_RT"
- Link: https://lore.kernel.org/all/[email protected]/
Leonardo Bras (4):
irq: Move spurious_deferred bit from BIT(31) to BIT(0)
irq/spurious: Account for multiple handles in note_interrupt
irq: Introduce IRQ_HANDLED_MANY
tty/serial8250: Make use of IRQ_HANDLED_MANY interface
include/linux/irqdesc.h | 11 +++++-
include/linux/irqreturn.h | 23 +++++++++++-
include/linux/serial_8250.h | 2 +-
drivers/tty/serial/8250/8250_core.c | 13 ++++---
drivers/tty/serial/8250/8250_port.c | 16 +++++----
kernel/irq/chip.c | 10 ++++--
kernel/irq/handle.c | 3 ++
kernel/irq/manage.c | 8 +++--
kernel/irq/spurious.c | 55 ++++++++++++++++++-----------
9 files changed, 102 insertions(+), 39 deletions(-)
base-commit: 63d966ad6fecb66769e56fe2285de1e3b448f2ff
--
2.43.2
Make sure threads_handled reserves a bit for the SPURIOUS_DEFERRED flag.
This will be useful in the next patch of this series.
Signed-off-by: Leonardo Bras <[email protected]>
---
include/linux/irqdesc.h | 9 +++++++++
kernel/irq/manage.c | 4 ++--
kernel/irq/spurious.c | 6 ++----
3 files changed, 13 insertions(+), 6 deletions(-)
diff --git a/include/linux/irqdesc.h b/include/linux/irqdesc.h
index d9451d456a733..62aff209315fe 100644
--- a/include/linux/irqdesc.h
+++ b/include/linux/irqdesc.h
@@ -251,4 +251,13 @@ irq_set_lockdep_class(unsigned int irq, struct lock_class_key *lock_class,
__irq_set_lockdep_class(irq, lock_class, request_class);
}
+#define SPURIOUS_DEFERRED (0x1)
+#define SPURIOUS_DEFERRED_SHIFT (1)
+
+static inline void irq_add_handled(struct irq_desc *desc, int i)
+{
+ i <<= SPURIOUS_DEFERRED_SHIFT;
+ atomic_add(i, &desc->threads_handled);
+}
+
#endif
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 1782f90cd8c6c..5bc609c7b728c 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -1191,7 +1191,7 @@ irq_forced_thread_fn(struct irq_desc *desc, struct irqaction *action)
local_irq_disable();
ret = action->thread_fn(action->irq, action->dev_id);
if (ret == IRQ_HANDLED)
- atomic_inc(&desc->threads_handled);
+ irq_add_handled(desc, 1);
irq_finalize_oneshot(desc, action);
if (!IS_ENABLED(CONFIG_PREEMPT_RT))
@@ -1212,7 +1212,7 @@ static irqreturn_t irq_thread_fn(struct irq_desc *desc,
ret = action->thread_fn(action->irq, action->dev_id);
if (ret == IRQ_HANDLED)
- atomic_inc(&desc->threads_handled);
+ irq_add_handled(desc, 1);
irq_finalize_oneshot(desc, action);
return ret;
diff --git a/kernel/irq/spurious.c b/kernel/irq/spurious.c
index 02b2daf074414..d92f33b2e31ee 100644
--- a/kernel/irq/spurious.c
+++ b/kernel/irq/spurious.c
@@ -267,8 +267,6 @@ try_misrouted_irq(unsigned int irq, struct irq_desc *desc,
return action && (action->flags & IRQF_IRQPOLL);
}
-#define SPURIOUS_DEFERRED 0x80000000
-
void note_interrupt(struct irq_desc *desc, irqreturn_t action_ret)
{
unsigned int irq;
@@ -312,7 +310,7 @@ void note_interrupt(struct irq_desc *desc, irqreturn_t action_ret)
if (action_ret == IRQ_WAKE_THREAD) {
int handled;
/*
- * We use bit 31 of thread_handled_last to
+ * We use bit 0 of thread_handled_last to
* denote the deferred spurious detection
* active. No locking necessary as
* thread_handled_last is only accessed here
@@ -328,7 +326,7 @@ void note_interrupt(struct irq_desc *desc, irqreturn_t action_ret)
* returned IRQ_HANDLED since the last
* interrupt happened.
*
- * For simplicity we just set bit 31, as it is
+ * For simplicity we just set bit 0, as it is
* set in threads_handled_last as well. So we
* avoid extra masking. And we really do not
* care about the high bits of the handled
--
2.43.2
Currently note_interrupt() will check threads_handled for changes and use
it to mark an IRQ as handled, in order to avoid having a threaded
interrupt falsely trigger the unhandled IRQ detection.
This detection can still be falsely triggered if many handled IRQs are
accounted between consecutive checks of threads_handled, as those will be
counted as a single one by the unhandled IRQ detection.
In order to fix this, subtract from irqs_unhandled the number of IRQs
handled since the last check (threads_handled - threads_handled_last).
Signed-off-by: Leonardo Bras <[email protected]>
---
include/linux/irqdesc.h | 2 +-
kernel/irq/spurious.c | 53 ++++++++++++++++++++++++++---------------
2 files changed, 35 insertions(+), 20 deletions(-)
diff --git a/include/linux/irqdesc.h b/include/linux/irqdesc.h
index 62aff209315fe..957ac02e9ec2b 100644
--- a/include/linux/irqdesc.h
+++ b/include/linux/irqdesc.h
@@ -67,7 +67,7 @@ struct irq_desc {
unsigned long last_unhandled; /* Aging timer for unhandled count */
unsigned int irqs_unhandled;
atomic_t threads_handled;
- int threads_handled_last;
+ unsigned int threads_handled_last;
raw_spinlock_t lock;
struct cpumask *percpu_enabled;
const struct cpumask *percpu_affinity;
diff --git a/kernel/irq/spurious.c b/kernel/irq/spurious.c
index d92f33b2e31ee..4e8e2727b8beb 100644
--- a/kernel/irq/spurious.c
+++ b/kernel/irq/spurious.c
@@ -267,6 +267,28 @@ try_misrouted_irq(unsigned int irq, struct irq_desc *desc,
return action && (action->flags & IRQF_IRQPOLL);
}
+static inline int get_handled_diff(struct irq_desc *desc)
+{
+ unsigned int handled;
+ int diff;
+
+ handled = (unsigned int)atomic_read(&desc->threads_handled);
+ handled |= SPURIOUS_DEFERRED;
+
+ diff = handled - desc->threads_handled_last;
+ diff >>= SPURIOUS_DEFERRED_SHIFT;
+
+ /*
+ * Note: We keep the SPURIOUS_DEFERRED bit set. We are handling the
+ * previous invocation right now. Keep it for the current one, so the
+ * next hardware interrupt will account for it.
+ */
+ if (diff != 0)
+ desc->threads_handled_last = handled;
+
+ return diff;
+}
+
void note_interrupt(struct irq_desc *desc, irqreturn_t action_ret)
{
unsigned int irq;
@@ -308,7 +330,7 @@ void note_interrupt(struct irq_desc *desc, irqreturn_t action_ret)
* interrupt.
*/
if (action_ret == IRQ_WAKE_THREAD) {
- int handled;
+ int diff;
/*
* We use bit 0 of thread_handled_last to
* denote the deferred spurious detection
@@ -325,27 +347,20 @@ void note_interrupt(struct irq_desc *desc, irqreturn_t action_ret)
* Check whether one of the threaded handlers
* returned IRQ_HANDLED since the last
* interrupt happened.
- *
- * For simplicity we just set bit 0, as it is
- * set in threads_handled_last as well. So we
- * avoid extra masking. And we really do not
- * care about the high bits of the handled
- * count. We just care about the count being
- * different than the one we saw before.
*/
- handled = atomic_read(&desc->threads_handled);
- handled |= SPURIOUS_DEFERRED;
- if (handled != desc->threads_handled_last) {
- action_ret = IRQ_HANDLED;
+ diff = get_handled_diff(desc);
+ if (diff > 0) {
/*
- * Note: We keep the SPURIOUS_DEFERRED
- * bit set. We are handling the
- * previous invocation right now.
- * Keep it for the current one, so the
- * next hardware interrupt will
- * account for it.
+ * Subtract from irqs_unhandled the number of
+ * IRQs handled since the last change in
+ * threads_handled.
*/
- desc->threads_handled_last = handled;
+ if (diff < desc->irqs_unhandled)
+ desc->irqs_unhandled -= diff;
+ else
+ desc->irqs_unhandled = 0;
+
+ action_ret = IRQ_HANDLED;
} else {
/*
* None of the threaded handlers felt
--
2.43.2
In threaded IRQs, some IRQ handlers are able to handle many requests in a
single run, but this is only accounted as a single IRQ_HANDLED when
increasing threads_handled.
In order to fix this, introduce IRQ_HANDLED_MANY, so the return value of
those IRQ handlers is able to signal that many IRQs got handled in that
run.
In scenarios where there is no need to keep track of IRQs handled, convert
it back to IRQ_HANDLED.
Signed-off-by: Leonardo Bras <[email protected]>
---
include/linux/irqreturn.h | 23 ++++++++++++++++++++++-
kernel/irq/chip.c | 10 ++++++++--
kernel/irq/handle.c | 3 +++
kernel/irq/manage.c | 4 ++++
4 files changed, 37 insertions(+), 3 deletions(-)
diff --git a/include/linux/irqreturn.h b/include/linux/irqreturn.h
index d426c7ad92bfd..204030696676c 100644
--- a/include/linux/irqreturn.h
+++ b/include/linux/irqreturn.h
@@ -7,14 +7,35 @@
* @IRQ_NONE: interrupt was not from this device or was not handled
* @IRQ_HANDLED: interrupt was handled by this device
* @IRQ_WAKE_THREAD: handler requests to wake the handler thread
* @IRQ_HANDLED_MANY: interrupt was handled by this device multiple times.
* Should be the only bit set in the first 3 bits, and
* carry a count > 1 in the next bits.
*/
enum irqreturn {
IRQ_NONE = (0 << 0),
IRQ_HANDLED = (1 << 0),
IRQ_WAKE_THREAD = (1 << 1),
+ IRQ_HANDLED_MANY = (1 << 2),
+ IRQ_RETMASK = IRQ_HANDLED | IRQ_WAKE_THREAD | IRQ_HANDLED_MANY,
};
-typedef enum irqreturn irqreturn_t;
+#define IRQ_HANDLED_MANY_SHIFT (3)
+
+typedef int irqreturn_t;
#define IRQ_RETVAL(x) ((x) ? IRQ_HANDLED : IRQ_NONE)
+#define IRQ_RETVAL_MANY(x) \
+({ \
+ __typeof__(x) __x = (x); \
+ irqreturn_t __ret; \
+ if (__x == 0) \
+ __ret = IRQ_NONE; \
+ else if (__x == 1) \
+ __ret = IRQ_HANDLED; \
+ else \
+ __ret = IRQ_HANDLED_MANY | (__x << IRQ_HANDLED_MANY_SHIFT); \
+ __ret; \
+})
+
+#define IRQ_HANDLED_MANY_GET(x) ((x) >> IRQ_HANDLED_MANY_SHIFT)
#endif
diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
index dc94e0bf2c940..233cf22a5b771 100644
--- a/kernel/irq/chip.c
+++ b/kernel/irq/chip.c
@@ -482,8 +482,14 @@ void handle_nested_irq(unsigned int irq)
raw_spin_unlock_irq(&desc->lock);
action_ret = IRQ_NONE;
- for_each_action_of_desc(desc, action)
- action_ret |= action->thread_fn(action->irq, action->dev_id);
+ for_each_action_of_desc(desc, action) {
+ irqreturn_t ret = action->thread_fn(action->irq, action->dev_id);
+
+ if ((ret & IRQ_RETMASK) == IRQ_HANDLED_MANY)
+ ret = IRQ_HANDLED;
+
+ action_ret |= ret;
+ }
if (!irq_settings_no_debug(desc))
note_interrupt(desc, action_ret);
diff --git a/kernel/irq/handle.c b/kernel/irq/handle.c
index 9489f93b3db34..bfc6e3e43045a 100644
--- a/kernel/irq/handle.c
+++ b/kernel/irq/handle.c
@@ -162,6 +162,9 @@ irqreturn_t __handle_irq_event_percpu(struct irq_desc *desc)
irq, action->handler))
local_irq_disable();
+ if ((res & IRQ_RETMASK) == IRQ_HANDLED_MANY)
+ res = IRQ_HANDLED;
+
switch (res) {
case IRQ_WAKE_THREAD:
/*
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 5bc609c7b728c..e684c7107ff90 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -1192,6 +1192,8 @@ irq_forced_thread_fn(struct irq_desc *desc, struct irqaction *action)
ret = action->thread_fn(action->irq, action->dev_id);
if (ret == IRQ_HANDLED)
irq_add_handled(desc, 1);
+ else if ((ret & IRQ_RETMASK) == IRQ_HANDLED_MANY)
+ irq_add_handled(desc, IRQ_HANDLED_MANY_GET(ret));
irq_finalize_oneshot(desc, action);
if (!IS_ENABLED(CONFIG_PREEMPT_RT))
@@ -1213,6 +1215,8 @@ static irqreturn_t irq_thread_fn(struct irq_desc *desc,
ret = action->thread_fn(action->irq, action->dev_id);
if (ret == IRQ_HANDLED)
irq_add_handled(desc, 1);
+ else if ((ret & IRQ_RETMASK) == IRQ_HANDLED_MANY)
+ irq_add_handled(desc, IRQ_HANDLED_MANY_GET(ret));
irq_finalize_oneshot(desc, action);
return ret;
--
2.43.2
For every TX byte an IRQ is requested.
On threaded IRQs, the handler's call to serial8250_tx_chars() can send
multiple bytes, limited to its queue size (tx_loadsz).
When this happens, the handler returns IRQ_HANDLED, which reduces the
unhandled IRQ counter only by 1, even though many requests have been
handled at once.
This causes the unhandled IRQ counter to go up until it reaches the maximum
and causes the registered IRQ to be disabled, thus breaking the serial
console.
Make use of the newly introduced IRQ_HANDLED_MANY interface to return the
number of requests handled, so the unhandled IRQ counter can get decreased
accordingly.
Signed-off-by: Leonardo Bras <[email protected]>
---
include/linux/serial_8250.h | 2 +-
drivers/tty/serial/8250/8250_core.c | 13 ++++++++-----
drivers/tty/serial/8250/8250_port.c | 16 ++++++++++------
3 files changed, 19 insertions(+), 12 deletions(-)
diff --git a/include/linux/serial_8250.h b/include/linux/serial_8250.h
index ec46e3b49ee99..c9d4271b71d70 100644
--- a/include/linux/serial_8250.h
+++ b/include/linux/serial_8250.h
@@ -200,7 +200,7 @@ int fsl8250_handle_irq(struct uart_port *port);
int serial8250_handle_irq(struct uart_port *port, unsigned int iir);
u16 serial8250_rx_chars(struct uart_8250_port *up, u16 lsr);
void serial8250_read_char(struct uart_8250_port *up, u16 lsr);
-void serial8250_tx_chars(struct uart_8250_port *up);
+int serial8250_tx_chars(struct uart_8250_port *up);
unsigned int serial8250_modem_status(struct uart_8250_port *up);
void serial8250_init_port(struct uart_8250_port *up);
void serial8250_set_defaults(struct uart_8250_port *up);
diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c
index ae637155fe7cd..2fab9102eec45 100644
--- a/drivers/tty/serial/8250/8250_core.c
+++ b/drivers/tty/serial/8250/8250_core.c
@@ -110,7 +110,7 @@ static irqreturn_t serial8250_interrupt(int irq, void *dev_id)
{
struct irq_info *i = dev_id;
struct list_head *l, *end = NULL;
- int pass_counter = 0, handled = 0;
+ int pass_counter = 0, handled_total = 0;
pr_debug("%s(%d): start\n", __func__, irq);
@@ -120,15 +120,18 @@ static irqreturn_t serial8250_interrupt(int irq, void *dev_id)
do {
struct uart_8250_port *up;
struct uart_port *port;
+ int handled;
up = list_entry(l, struct uart_8250_port, list);
port = &up->port;
- if (port->handle_irq(port)) {
- handled = 1;
+ handled = port->handle_irq(port);
+ if (handled) {
+ handled_total += handled;
end = NULL;
- } else if (end == NULL)
+ } else if (end == NULL) {
end = l;
+ }
l = l->next;
@@ -140,7 +143,7 @@ static irqreturn_t serial8250_interrupt(int irq, void *dev_id)
pr_debug("%s(%d): end\n", __func__, irq);
- return IRQ_RETVAL(handled);
+ return IRQ_RETVAL_MANY(handled_total);
}
/*
diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
index f799c34f1603c..74d53507a73d4 100644
--- a/drivers/tty/serial/8250/8250_port.c
+++ b/drivers/tty/serial/8250/8250_port.c
@@ -1802,7 +1802,7 @@ u16 serial8250_rx_chars(struct uart_8250_port *up, u16 lsr)
}
EXPORT_SYMBOL_GPL(serial8250_rx_chars);
-void serial8250_tx_chars(struct uart_8250_port *up)
+int serial8250_tx_chars(struct uart_8250_port *up)
{
struct uart_port *port = &up->port;
struct circ_buf *xmit = &port->state->xmit;
@@ -1810,15 +1810,15 @@ void serial8250_tx_chars(struct uart_8250_port *up)
if (port->x_char) {
uart_xchar_out(port, UART_TX);
- return;
+ return 0;
}
if (uart_tx_stopped(port)) {
serial8250_stop_tx(port);
- return;
+ return 0;
}
if (uart_circ_empty(xmit)) {
__stop_tx(up);
- return;
+ return 0;
}
count = up->tx_loadsz;
@@ -1858,6 +1858,9 @@ void serial8250_tx_chars(struct uart_8250_port *up)
*/
if (uart_circ_empty(xmit) && !(up->capabilities & UART_CAP_RPM))
__stop_tx(up);
+
+ /* Return number of chars sent */
+ return up->tx_loadsz - count;
}
EXPORT_SYMBOL_GPL(serial8250_tx_chars);
@@ -1923,6 +1926,7 @@ int serial8250_handle_irq(struct uart_port *port, unsigned int iir)
bool skip_rx = false;
unsigned long flags;
u16 status;
+ int handled = 0;
if (iir & UART_IIR_NO_INT)
return 0;
@@ -1956,14 +1960,14 @@ int serial8250_handle_irq(struct uart_port *port, unsigned int iir)
serial8250_modem_status(up);
if ((status & UART_LSR_THRE) && (up->ier & UART_IER_THRI)) {
if (!up->dma || up->dma->tx_err)
- serial8250_tx_chars(up);
+ handled = serial8250_tx_chars(up);
else if (!up->dma->tx_running)
__stop_tx(up);
}
uart_unlock_and_check_sysrq_irqrestore(port, flags);
- return 1;
+ return handled ? : 1;
}
EXPORT_SYMBOL_GPL(serial8250_handle_irq);
--
2.43.2
On Fri, 16 Feb 2024, Leonardo Bras wrote:
> For every TX byte an IRQ is requested.
> On threaded IRQs, the handler's call to serial8250_tx_chars() can send
> multiple bytes, limited to its queue size (tx_loadsz).
Perhaps I'm missing something here but I don't understand what this tries
to say.
- 8250 driver gets TX empty IRQ
- We write x bytes to FIFO
- UART blasts those bits to wire, eventually emptying FIFO
- We get the next TX empty IRQ
What in this makes "for every TX byte an IRQ is requested" true? There's
one IRQ only for every x bytes TX'ed as far as I can tell!?!
--
i.
On Fri, Feb 16, 2024 at 04:59:44AM -0300, Leonardo Bras wrote:
> Currently note_interrupt() will check threads_handled for changes and use
> it to mark an IRQ as handled, in order to avoid having a threaded
> interrupt to falsely trigger unhandled IRQ detection.
>
> This detection can still be falsely triggered if we have many IRQ handled
> accounted between each check of threads_handled, as those will be counted
> as a single one in the unhandled IRQ detection.
>
> In order to fix this, subtract from irqs_unhandled the number of IRQs
> handled since the last check (threads_handled - threads_handled_last).
...
> +static inline int get_handled_diff(struct irq_desc *desc)
> +{
> + unsigned int handled;
> + int diff;
> +
> + handled = (unsigned int)atomic_read(&desc->threads_handled);
> + handled |= SPURIOUS_DEFERRED;
> +
> + diff = handled - desc->threads_handled_last;
> + diff >>= SPURIOUS_DEFERRED_SHIFT;
> +
> + /*
> + * Note: We keep the SPURIOUS_DEFERRED bit set. We are handling the
> + * previous invocation right now. Keep it for the current one, so the
> + * next hardware interrupt will account for it.
> + */
> + if (diff != 0)
if (diff)
> + desc->threads_handled_last = handled;
> +
> + return diff;
> +}
...
> + diff = get_handled_diff(desc);
> + if (diff > 0) {
diff may not be negative as you always right shift by 1 (or more) bit. Hence
if (diff)
will suffice (and also be aligned with the similar check inside the helper), and
making the helper return an unsigned value will be clearer. Am I correct?
--
With Best Regards,
Andy Shevchenko
On Fri, Feb 16, 2024 at 12:12:44PM +0200, Ilpo Järvinen wrote:
> On Fri, 16 Feb 2024, Leonardo Bras wrote:
>
> > For every TX byte an IRQ is requested.
> > On threaded IRQs, the handler's call to serial8250_tx_chars() can send
> > multiple bytes, limited to its queue size (tx_loadsz).
>
> Perhaps I'm missing something here but I don't understand what this tries
> to say.
>
> - 8250 driver gets TX empty IRQ
> - We write x bytes to FIFO
> - UART blasts those bits to wire, eventually emptying FIFO
> - We get the next TX empty IRQ
>
> What in this makes "for every TX byte an IRQ is requested" true? There's
> one IRQ only for every x bytes TX'ed as far as I can tell!?!
>
Context:
I created a C program for writing data to the serial port, by opening
/dev/ttyS0 and fprintf()ing strings of about 100 bytes to it. This
fprintf() runs alone in a for loop, and is compiled with GCC.
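Roughly like this (a simplified sketch of the reproducer, not the exact
program):

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/dev/ttyS0", "w");
	unsigned long i;

	if (!f)
		return 1;

	/* ~100 bytes per iteration, written as fast as possible */
	for (i = 0; ; i++)
		fprintf(f, "test line %10lu: %-70s\n", i, "x");

	return 0;
}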
I noticed that it creates an IRQ for every char written to ttyS0.
Maybe I missed something and this is not a rule, but that's what I
observed on average.
My scenario:
- Linux compiled with force_irqthreads = true
- serial8250 used as a serial terminal (emulated by qemu)
For non-irqthreads it works just fine.
But in this (force_irqthreads = true) scenario the IRQ gets triggered a
lot of times while the threaded handler runs only occasionally, which makes
it easy for the IRQs to be handled in batches, and that is fine.
The issue: this causes irqs_unhandled to be incremented every time
note_interrupt() does not perceive a change in threads_handled. Since
threads_handled increments in batches, many of those IRQs will be
considered "unhandled", leading to the serial8250 IRQ being disabled.
I created a way (patches 2 & 3) to account for how many IRQ requests have
actually been handled by a handler, if that handler can deal with them in
batches.
This patch is about making serial8250 report how many IRQs it handled,
using the (possibly incorrect) information that it causes 1 IRQ per
TX byte.
This 1 IRQ/TX-byte behavior was observed by tracing the IRQ count and the
number of bytes sent. It is also suggested by serial8250_tx_chars(), which
seems to transmit 1 byte at a time, repeating that up to the FIFO size
(up->tx_loadsz) within that function.
If that proves incorrect, I will need to find a way of tracking the number
of IRQs handled in that scenario, so the IRQ disabling can be avoided.
Does it make sense?
Thanks!
Leo
On Fri, Feb 16, 2024 at 05:36:37PM +0200, Andy Shevchenko wrote:
> On Fri, Feb 16, 2024 at 04:59:44AM -0300, Leonardo Bras wrote:
> > Currently note_interrupt() will check threads_handled for changes and use
> > it to mark an IRQ as handled, in order to avoid having a threaded
> > interrupt to falsely trigger unhandled IRQ detection.
> >
> > This detection can still be falsely triggered if we have many IRQ handled
> > accounted between each check of threads_handled, as those will be counted
> > as a single one in the unhandled IRQ detection.
> >
> > In order to fix this, subtract from irqs_unhandled the number of IRQs
> > handled since the last check (threads_handled - threads_handled_last).
>
> ...
>
> > +static inline int get_handled_diff(struct irq_desc *desc)
> > +{
> > + unsigned int handled;
> > + int diff;
> > +
> > + handled = (unsigned int)atomic_read(&desc->threads_handled);
> > + handled |= SPURIOUS_DEFERRED;
> > +
> > + diff = handled - desc->threads_handled_last;
> > + diff >>= SPURIOUS_DEFERRED_SHIFT;
> > +
> > + /*
> > + * Note: We keep the SPURIOUS_DEFERRED bit set. We are handling the
> > + * previous invocation right now. Keep it for the current one, so the
> > + * next hardware interrupt will account for it.
> > + */
>
Hello Andy, thanks for reviewing!
> > + if (diff != 0)
>
> if (diff)
Sure
>
> > + desc->threads_handled_last = handled;
> > +
> > + return diff;
> > +}
>
> ...
>
> > + diff = get_handled_diff(desc);
> > + if (diff > 0) {
>
> diff may not be negative as you always right shift by 1 (or more) bit.
Agree
> Hence
>
> if (diff)
>
> will suffice (and also be aligned with the similar check inside the helper), and
> making the helper return an unsigned value will be clearer. Am I correct?
Sure, you are correct.
I just think having it be (diff > 0) makes it clear that we only do the
subtraction if diff is bigger than zero, while (diff) could read as diff
merely being valid, and would require the reader to go back in the code to
see that diff is an int.
Does it make sense?
Other than that, I agree the negative half of diff is never going to get
used, and it's better to go with unsigned int in both cases.
That will be changed in the next version.
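I.e. something like this (an uncompiled sketch of the next version):

static inline unsigned int get_handled_diff(struct irq_desc *desc)
{
	unsigned int handled, diff;

	handled = (unsigned int)atomic_read(&desc->threads_handled);
	handled |= SPURIOUS_DEFERRED;

	/*
	 * Keep the SPURIOUS_DEFERRED bit set, so the next hardware
	 * interrupt will account for the invocation handled right now.
	 */
	diff = (handled - desc->threads_handled_last) >> SPURIOUS_DEFERRED_SHIFT;
	if (diff)
		desc->threads_handled_last = handled;

	return diff;
}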
Thanks!
Leo
On Fri, Feb 16 2024 at 04:59, Leonardo Bras wrote:
> In threaded IRQs, some irq handlers are able to handle many requests at a
> single run, but this is only accounted as a single IRQ_HANDLED when
> increasing threads_handled.
>
> In order to fix this, introduce IRQ_HANDLED_MANY, so the returned value of
> those IRQ handlers are able to signal that many IRQs got handled at that
> run.
>
> In scenarios where there is no need to keep track of IRQs handled, convert
> it back to IRQ_HANDLED.
That's not really workable as you'd have to update tons of drivers just
to deal with that corner case. That's error prone and just extra
complexity all over the place.
This really needs to be solved in the core code.
Thanks,
tglx
On Mon, Feb 19 2024 at 10:59, Thomas Gleixner wrote:
> On Fri, Feb 16 2024 at 04:59, Leonardo Bras wrote:
>
>> In threaded IRQs, some irq handlers are able to handle many requests at a
>> single run, but this is only accounted as a single IRQ_HANDLED when
>> increasing threads_handled.
>>
>> In order to fix this, introduce IRQ_HANDLED_MANY, so the returned value of
>> those IRQ handlers are able to signal that many IRQs got handled at that
>> run.
>>
>> In scenarios where there is no need to keep track of IRQs handled, convert
>> it back to IRQ_HANDLED.
>
> That's not really workable as you'd have to update tons of drivers just
> to deal with that corner case. That's error prone and just extra
> complexity all over the place.
>
> This really needs to be solved in the core code.
Something like the uncompiled below should do the trick.
Thanks,
tglx
---
--- a/include/linux/irqdesc.h
+++ b/include/linux/irqdesc.h
@@ -38,7 +38,8 @@ struct pt_regs;
* @affinity_notify: context for notification of affinity changes
* @pending_mask: pending rebalanced interrupts
* @threads_oneshot: bitfield to handle shared oneshot threads
- * @threads_active: number of irqaction threads currently running
+ * @threads_active: number of irqaction threads currently activated
+ * @threads_running: number of irqaction threads currently running
* @wait_for_threads: wait queue for sync_irq to wait for threaded handlers
* @nr_actions: number of installed actions on this descriptor
* @no_suspend_depth: number of irqactions on a irq descriptor with
@@ -80,6 +81,7 @@ struct irq_desc {
#endif
unsigned long threads_oneshot;
atomic_t threads_active;
+ atomic_t threads_running;
wait_queue_head_t wait_for_threads;
#ifdef CONFIG_PM_SLEEP
unsigned int nr_actions;
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -1194,9 +1194,11 @@ irq_forced_thread_fn(struct irq_desc *de
local_bh_disable();
if (!IS_ENABLED(CONFIG_PREEMPT_RT))
local_irq_disable();
+ atomic_inc(&desc->threads_running);
ret = action->thread_fn(action->irq, action->dev_id);
if (ret == IRQ_HANDLED)
atomic_inc(&desc->threads_handled);
+ atomic_dec(&desc->threads_running);
irq_finalize_oneshot(desc, action);
if (!IS_ENABLED(CONFIG_PREEMPT_RT))
--- a/kernel/irq/spurious.c
+++ b/kernel/irq/spurious.c
@@ -350,6 +350,12 @@ void note_interrupt(struct irq_desc *des
desc->threads_handled_last = handled;
} else {
/*
+ * Avoid false positives when there is
+ * actually a thread running.
+ */
+ if (atomic_read(&desc->threads_running))
+ return;
+ /*
* None of the threaded handlers felt
* responsible for the last interrupt
*
On Mon, Feb 19, 2024 at 12:03:07PM +0100, Thomas Gleixner wrote:
> On Mon, Feb 19 2024 at 10:59, Thomas Gleixner wrote:
Hi Thomas, thanks for reviewing!
> > On Fri, Feb 16 2024 at 04:59, Leonardo Bras wrote:
> >
> >> In threaded IRQs, some irq handlers are able to handle many requests at a
> >> single run, but this is only accounted as a single IRQ_HANDLED when
> >> increasing threads_handled.
> >>
> >> In order to fix this, introduce IRQ_HANDLED_MANY, so the returned value of
> >> those IRQ handlers are able to signal that many IRQs got handled at that
> >> run.
> >>
> >> In scenarios where there is no need to keep track of IRQs handled, convert
> >> it back to IRQ_HANDLED.
> >
> > That's not really workable as you'd have to update tons of drivers just
> > to deal with that corner case. That's error prone and just extra
> > complexity all over the place.
I agree, that's a downside of this implementation.
> >
> > This really needs to be solved in the core code.
>
> Something like the uncompiled below should do the trick.
>
> Thanks,
>
> tglx
> ---
> --- a/include/linux/irqdesc.h
> +++ b/include/linux/irqdesc.h
> @@ -38,7 +38,8 @@ struct pt_regs;
> * @affinity_notify: context for notification of affinity changes
> * @pending_mask: pending rebalanced interrupts
> * @threads_oneshot: bitfield to handle shared oneshot threads
> - * @threads_active: number of irqaction threads currently running
> + * @threads_active: number of irqaction threads currently activated
> + * @threads_running: number of irqaction threads currently running
> * @wait_for_threads: wait queue for sync_irq to wait for threaded handlers
> * @nr_actions: number of installed actions on this descriptor
> * @no_suspend_depth: number of irqactions on a irq descriptor with
> @@ -80,6 +81,7 @@ struct irq_desc {
> #endif
> unsigned long threads_oneshot;
> atomic_t threads_active;
> + atomic_t threads_running;
> wait_queue_head_t wait_for_threads;
> #ifdef CONFIG_PM_SLEEP
> unsigned int nr_actions;
> --- a/kernel/irq/manage.c
> +++ b/kernel/irq/manage.c
> @@ -1194,9 +1194,11 @@ irq_forced_thread_fn(struct irq_desc *de
> local_bh_disable();
> if (!IS_ENABLED(CONFIG_PREEMPT_RT))
> local_irq_disable();
> + atomic_inc(&desc->threads_running);
> ret = action->thread_fn(action->irq, action->dev_id);
> if (ret == IRQ_HANDLED)
> atomic_inc(&desc->threads_handled);
> + atomic_dec(&desc->threads_running);
>
> irq_finalize_oneshot(desc, action);
> if (!IS_ENABLED(CONFIG_PREEMPT_RT))
> --- a/kernel/irq/spurious.c
> +++ b/kernel/irq/spurious.c
> @@ -350,6 +350,12 @@ void note_interrupt(struct irq_desc *des
> desc->threads_handled_last = handled;
> } else {
> /*
> + * Avoid false positives when there is
> + * actually a thread running.
> + */
> + if (atomic_read(&desc->threads_running))
> + return;
> + /*
> * None of the threaded handlers felt
> * responsible for the last interrupt
> *
>
I agree the above may be able to solve the issue, but it would make 2 extra
atomic ops necessary in the thread handling the IRQ, as well as one extra
atomic operation in note_interrupt(), which could increase latency on IRQs
that defer their handler to a thread.
I mean, yes, the CPU running note_interrupt() would probably already have
exclusive access to this cacheline, but it further increases cacheline
bouncing and also adds the memory barriers that come with atomic
operations, even if we use an extra bit from threads_handled instead of
allocating a new field for threads_running.
On top of that, let's consider a scenario where the threaded handler
resolves a lot of requests, but does not necessarily spend a lot of time
doing so. This allows the thread to run for little time while resolving a
lot of requests.
In this scenario, note_interrupt() could return without incrementing
irqs_unhandled for those IRQ that happen while the brief thread is running,
but every other IRQ would cause note_interrupt() to increase
irqs_unhandled, which would cause the bug to still reproduce.
I understand my suggested change increases irqreturn complexity, but it
should not add much overhead to either IRQ deferring or IRQ thread
handling. Also, since it accounts for every IRQ actually handled, it does
not matter how long the handler thread runs; it still avoids the bug.
As you previously noted, the main issue in my suggestion is about changing
drivers' code. But while this change is optional, I wonder how many
drivers handle IRQs that:
- use edge type interrupts, and
- can trigger really fast, many many times, and
- can run in force-thread mode, and
- have handlers that are able to deal with multiple IRQs per call.
If there aren't many that meet these requirements, I could try to tackle
them as they become a problem.
About solving it directly in generic code, I agree it would be the ideal
scenario. That's why I suggested that in RFCv1: if the thread handles a
single IRQ, we reset irqs_unhandled to zero. That's a good indicator of
whether the thread is stuck, and TBH I don't think this is too far away
from the current 100/100k if we consider that some of those handlers can
handle multiple IRQs at once.
What are your thoughts on that?
Thanks!
Leo
On Wed, Feb 21 2024 at 02:39, Leonardo Bras wrote:
> On Mon, Feb 19, 2024 at 12:03:07PM +0100, Thomas Gleixner wrote:
>> >> In scenarios where there is no need to keep track of IRQs handled, convert
>> >> it back to IRQ_HANDLED.
>> >
>> > That's not really workable as you'd have to update tons of drivers just
>> > to deal with that corner case. That's error prone and just extra
>> > complexity all over the place.
>
> I agree, that's a downside of this implementation.
A serious one which is not really workable. See below.
> I agree the above may be able to solve the issue, but it would make 2 extra
> atomic ops necessary in the thread handling the IRQ, as well as one extra
> atomic operation in note_interrupt(), which could increase latency on this
> IRQ deferring the handler to a thread.
>
> I mean, yes, the cpu running note_interrupt() would probably already have
> exclusiveness for this cacheline, but it further increases cacheline
> bouncing and also adds the mem barriers that incur on atomic operations,
> even if we use an extra bit from threads_handled instead of allocate a new
> field for threads_running.
I think that's a strawman. Atomic operations can of course be more
expensive than non-atomic ones, but they only start to make a difference
when the cache line is contended. That's not the case here for the
normal operations.
Interrupts and their threads are strictly targeted to a single CPU and
the cache line is already hot and had to be made exclusive because of
other write operations to it.
There is usually no concurrency at all, except for administrative
operations like enable/disable or affinity changes. Those administrative
operations are not high frequency and the resulting cache line bouncing
is unavoidable even without that change. But does it matter in the
larger picture? I don't think so.
> On top of that, let's think on a scenario where the threaded handler will
> solve a lot of requests, but not necessarily spend a lot of time doing so.
> This allows the thread to run for little time while solving a lot of
> requests.
>
> In this scenario, note_interrupt() could return without incrementing
> irqs_unhandled for those IRQ that happen while the brief thread is running,
> but every other IRQ would cause note_interrupt() to increase
> irqs_unhandled, which would cause the bug to still reproduce.
In theory yes. Does it happen in practice?
But that exposes a flaw in the actual detection code. The code is
unconditionally accumulating if there is an unhandled interrupt within
100ms after the last unhandled one. IOW, if there is a periodic
unhandled one every 50ms, the interrupt will be shut down after 100000 *
50ms = 5000s ~= 83.3m ~= 1.4h. And it neither cares about the number of
actually handled interrupts.
The spurious detector is really about runaway interrupts which hog a CPU
completely, but the above is not what we want to protect against.
> I understand my suggested change increases irq_return complexity, but it
> should not increase much of the overhead in both IRQ deferring and IRQ
> thread handling. Also, since it accounts for every IRQ actually handled, it
> does not matter how long the handler thread runs, it still avoids the
> bug.
I'm not worried about the extra complexity in note_interrupt(). That's
fine and I don't have a strong opinion whether using your patch 2/4 or
the simpler one I suggested.
> As you previously noted, the main issue in my suggestion is about changing
> drivers' code. But while this change is optional, I wonder how many
> drivers handle IRQs that:
> - use edge type interrupts, and
It's not up to the driver to decide that. That's a platform property and
the driver does not even know and they should not care because all what
matters for the driver is that it gets the interrupts and does not lose
anything.
> - can trigger really fast, many many times, and
That's a hardware or emulation property and I don't know how many
devices would expose this behaviour.
> - can run in force-thread mode, and
Almost all drivers.
> - have handlers that are able to deal with multiple IRQs per call.
Pretty much every driver which has to deal with queues, FIFOs
etc. That's a lot of them.
> About solving it directly in generic code, I agree it would be the ideal
> scenario. That's why I suggested that in RFCv1: if the thread handles a
> single IRQ, we reset irqs_unhandled to zero. That's a good pointer on the
> thread being stuck, and TBH I don't think this is too far away from the
> current 100/100k if we consider some of those handlers can handle multiple
> IRQs at once.
It has the downside that a scenario where there is a spurious interrupt
flood, with an occasional one in between which is handled by the thread,
will not be detected anymore. The accumulation and time period tracking
are there for a reason.
But as I pointed out above the detection logic is flawed due to the
unconditional accumulation. Can you give the uncompiled below a test
ride with your scenario?
Thanks,
tglx
---
include/linux/irqdesc.h | 2 ++
kernel/irq/spurious.c | 33 +++++++++++++++++++++++++++++----
2 files changed, 31 insertions(+), 4 deletions(-)
--- a/include/linux/irqdesc.h
+++ b/include/linux/irqdesc.h
@@ -29,6 +29,7 @@ struct pt_regs;
* @wake_depth: enable depth, for multiple irq_set_irq_wake() callers
* @tot_count: stats field for non-percpu irqs
* @irq_count: stats field to detect stalled irqs
+ * @first_unhandled: Timestamp of the first unhandled interrupt
* @last_unhandled: aging timer for unhandled count
* @irqs_unhandled: stats field for spurious unhandled interrupts
* @threads_handled: stats field for deferred spurious detection of threaded handlers
@@ -64,6 +65,7 @@ struct irq_desc {
unsigned int wake_depth; /* nested wake enables */
unsigned int tot_count;
unsigned int irq_count; /* For detecting broken IRQs */
+ unsigned long first_unhandled;
unsigned long last_unhandled; /* Aging timer for unhandled count */
unsigned int irqs_unhandled;
atomic_t threads_handled;
--- a/kernel/irq/spurious.c
+++ b/kernel/irq/spurious.c
@@ -390,9 +390,14 @@ void note_interrupt(struct irq_desc *des
* working systems
*/
if (time_after(jiffies, desc->last_unhandled + HZ/10))
- desc->irqs_unhandled = 1;
- else
- desc->irqs_unhandled++;
+ desc->irqs_unhandled = 0;
+
+ if (!desc->irqs_unhandled) {
+ desc->irq_count = 0;
+ desc->first_unhandled = jiffies;
+ }
+
+ desc->irqs_unhandled++;
desc->last_unhandled = jiffies;
}
@@ -411,9 +416,27 @@ void note_interrupt(struct irq_desc *des
if (likely(desc->irq_count < 100000))
return;
- desc->irq_count = 0;
if (unlikely(desc->irqs_unhandled > 99900)) {
/*
+ * Validate that this is actually an interrupt storm, which
+ * happens to:
+ *
+ * - increment the unhandled count to ~100k within 5 seconds,
+ * which means there is a spurious interrupt every 50us
+ * - have an unhandled to handled ratio > 2
+ *
+ * Otherwise reset the detector and start over. This also
+ * covers the case where a threaded handler consumes
+ * several hard interrupts in one go which would otherwise
+ * accumulate to 99900 over time and trigger a false positive.
+ */
+ unsigned long td = desc->last_unhandled - desc->first_unhandled;
+ unsigned int handled = desc->irq_count - desc->irqs_unhandled;
+
+ if (td > 5 * HZ || desc->irqs_unhandled / handled < 3)
+ goto out;
+
+ /*
* The interrupt is stuck
*/
__report_bad_irq(desc, action_ret);
@@ -428,7 +451,9 @@ void note_interrupt(struct irq_desc *des
mod_timer(&poll_spurious_irq_timer,
jiffies + POLL_SPURIOUS_IRQ_INTERVAL);
}
+out:
desc->irqs_unhandled = 0;
+ desc->irq_count = 0;
}
bool noirqdebug __read_mostly;
On Wed, Feb 21 2024 at 16:41, Thomas Gleixner wrote:
> On Wed, Feb 21 2024 at 02:39, Leonardo Bras wrote:
> But as I pointed out above the detection logic is flawed due to the
> unconditional accumulation. Can you give the uncompiled below a test
> ride with your scenario?
Bah. Ignore this. I misread the code completely. No idea where my brain
was.
This thing triggers only when there are 100K interrupts and 99.9k of
them unhandled. The 100k total resets the unhandled counts.
Though one thing which strikes me as odd is that this actually triggers at
all, because it needs 99.9k unhandled out of 100k total. That means on
average every thread handler invocation handles 1000 hardware interrupts
in one go. Is that even realistic?
Thanks,
tglx
On Wed, Feb 21, 2024 at 04:41:20PM +0100, Thomas Gleixner wrote:
> On Wed, Feb 21 2024 at 02:39, Leonardo Bras wrote:
> > On Mon, Feb 19, 2024 at 12:03:07PM +0100, Thomas Gleixner wrote:
> >> >> In scenarios where there is no need to keep track of IRQs handled, convert
> >> >> it back to IRQ_HANDLED.
> >> >
> >> > That's not really workable as you'd have to update tons of drivers just
> >> > to deal with that corner case. That's error prone and just extra
> >> > complexity all over the place.
> >
> > I agree, that's a downside of this implementation.
>
> A serious one which is not really workable. See below.
>
> > I agree the above may be able to solve the issue, but it would make 2 extra
> > atomic ops necessary in the thread handling the IRQ, as well as one extra
> > atomic operation in note_interrupt(), which could increase latency on this
> > IRQ deferring the handler to a thread.
> >
> > I mean, yes, the cpu running note_interrupt() would probably already have
> > exclusiveness for this cacheline, but it further increases cacheline
> > bouncing and also adds the mem barriers that incur on atomic operations,
> > even if we use an extra bit from threads_handled instead of allocate a new
> > field for threads_running.
>
> I think that's a strawman. Atomic operations can of course be more
> expensive than non-atomic ones, but they only start to make a difference
> when the cache line is contended. That's not the case here for the
> normal operations.
>
> Interrupts and their threads are strictly targeted to a single CPU and
> the cache line is already hot and had to be made exclusive because of
> other write operations to it.
>
> There is usually no concurrency at all, except for administrative
> operations like enable/disable or affinity changes. Those administrative
> operations are not high frequency and the resulting cache line bouncing
> is unavoidable even without that change. But does it matter in the
> larger picture? I don't think so.
That's a fair point, but there are some use cases that use CPU Isolation on
top of PREEMPT_RT in order to reduce interference on a CPU running an RT
workload.
For those cases, IIRC the handler will run on a different (housekeeping)
CPU when those IRQs originate on an Isolated CPU, meaning the above
described cacheline bouncing is expected.
>
> > On top of that, let's think on a scenario where the threaded handler will
> > solve a lot of requests, but not necessarily spend a lot of time doing so.
> > This allows the thread to run for little time while solving a lot of
> > requests.
> >
> > In this scenario, note_interrupt() could return without incrementing
> > irqs_unhandled for those IRQ that happen while the brief thread is running,
> > but every other IRQ would cause note_interrupt() to increase
> > irqs_unhandled, which would cause the bug to still reproduce.
>
> In theory yes. Does it happen in practice?
>
> But that exposes a flaw in the actual detection code. The code is
> unconditionally accumulating if there is an unhandled interrupt within
> 100ms after the last unhandled one. IOW, if there is a periodic
> unhandled one every 50ms, the interrupt will be shut down after 100000 *
> 50ms = 5000s ~= 83.3m ~= 1.4h. And it neither cares about the number of
> actually handled interrupts.
>
> The spurious detector is really about runaway interrupts which hog a CPU
> completely, but the above is not what we want to protect against.
Now it makes a lot more sense to me.
Thanks!
>
> > I understand my suggested change increases irq_return complexity, but it
> > should not increase much of the overhead in both IRQ deferring and IRQ
> > thread handling. Also, since it accounts for every IRQ actually handled, it
> > does not matter how long the handler thread runs, it still avoids the
> > bug.
>
> I'm not worried about the extra complexity in note_interrupt(). That's
> fine and I don't have a strong opinion whether using your patch 2/4 or
> the simpler one I suggested.
>
> > As you previously noted, the main issue in my suggestion is about changing
> > drivers' code. But while this change is optional, I wonder how many
> > drivers handle IRQs that:
> > - use edge type interrupts, and
>
> It's not up to the driver to decide that. That's a platform property and
> the driver does not even know and they should not care because all what
> matters for the driver is that it gets the interrupts and does not lose
> anything.
>
> > - can trigger really fast, many many times, and
>
> That's a hardware or emulation property and I don't know how many
> devices would expose this behaviour.
>
> > - can run in force-thread mode, and
>
> Almost all drivers.
>
> > - have handlers that are able to deal with multiple IRQs per call.
>
> Pretty much every driver which has to deal with queues, FIFOs
> etc. That's a lot of them.
>
> > About solving it directly in generic code, I agree it would be the ideal
> > scenario. That's why I suggested that in RFCv1: if the thread handles a
> > single IRQ, we reset irqs_unhandled to zero. That's a good pointer on the
> > thread being stuck, and TBH I don't think this is too far away from the
> > current 100/100k if we consider some of those handlers can handle multiple
> > IRQs at once.
>
> It has the downside that a scenario where there is a spurious interrupt
> flood with an occasional one inbetween which is handled by the thread
> will not be detected anymore. The accumulation and time period tracking
> are there for a reason.
Ok, fair point.
But if we disable it, we end up not having even those few successes.
But now that I understand the point is preventing the CPU from being
hogged, it makes sense: we prefer disabling the interrupt over compromising
the system. It also produces a warning that may help to track down the
driver/device issue.
>
> But as I pointed out above the detection logic is flawed due to the
> unconditional accumulation. Can you give the uncompiled below a test
> ride with your scenario?
>
> Thanks,
>
> tglx
> ---
> include/linux/irqdesc.h | 2 ++
> kernel/irq/spurious.c | 33 +++++++++++++++++++++++++++++----
> 2 files changed, 31 insertions(+), 4 deletions(-)
>
> --- a/include/linux/irqdesc.h
> +++ b/include/linux/irqdesc.h
> @@ -29,6 +29,7 @@ struct pt_regs;
> * @wake_depth: enable depth, for multiple irq_set_irq_wake() callers
> * @tot_count: stats field for non-percpu irqs
> * @irq_count: stats field to detect stalled irqs
> + * @first_unhandled: Timestamp of the first unhandled interrupt
> * @last_unhandled: aging timer for unhandled count
> * @irqs_unhandled: stats field for spurious unhandled interrupts
> * @threads_handled: stats field for deferred spurious detection of threaded handlers
> @@ -64,6 +65,7 @@ struct irq_desc {
> unsigned int wake_depth; /* nested wake enables */
> unsigned int tot_count;
> unsigned int irq_count; /* For detecting broken IRQs */
> + unsigned long first_unhandled;
> unsigned long last_unhandled; /* Aging timer for unhandled count */
> unsigned int irqs_unhandled;
> atomic_t threads_handled;
> --- a/kernel/irq/spurious.c
> +++ b/kernel/irq/spurious.c
> @@ -390,9 +390,14 @@ void note_interrupt(struct irq_desc *des
> * working systems
> */
> if (time_after(jiffies, desc->last_unhandled + HZ/10))
> - desc->irqs_unhandled = 1;
> - else
> - desc->irqs_unhandled++;
> + desc->irqs_unhandled = 0;
> +
> + if (!desc->irqs_unhandled) {
> + desc->irq_count = 0;
> + desc->first_unhandled = jiffies;
> + }
> +
> + desc->irqs_unhandled++;
> desc->last_unhandled = jiffies;
> }
>
> @@ -411,9 +416,27 @@ void note_interrupt(struct irq_desc *des
> if (likely(desc->irq_count < 100000))
> return;
>
> - desc->irq_count = 0;
> if (unlikely(desc->irqs_unhandled > 99900)) {
> /*
> + * Validate that this is actually an interrupt storm, which
> + * happens to:
> + *
> + * - increment the unhandled count to ~100k within 5 seconds,
> + * which means there is a spurious interrupt every 50us
> + * - have an unhandled to handled ratio > 2
> + *
> + * Otherwise reset the detector and start over. This also
> + * covers the case where a threaded handler consumes
> + * several hard interrupts in one go which would otherwise
> + * accumulate to 99900 over time and trigger a false positive.
> + */
> + unsigned long td = desc->last_unhandled - desc->first_unhandled;
> + unsigned int handled = desc->irq_count - desc->irqs_unhandled;
> +
> + if (td > 5 * HZ || desc->irqs_unhandled / handled < 3)
> + goto out;
> +
> + /*
> * The interrupt is stuck
> */
> __report_bad_irq(desc, action_ret);
> @@ -428,7 +451,9 @@ void note_interrupt(struct irq_desc *des
> mod_timer(&poll_spurious_irq_timer,
> jiffies + POLL_SPURIOUS_IRQ_INTERVAL);
> }
> +out:
> desc->irqs_unhandled = 0;
> + desc->irq_count = 0;
> }
>
> bool noirqdebug __read_mostly;
>
Sure, I will give it a try.
Now that I have a better understanding of what is expected here, I will
also try to experiment a little.
Thank you!
Leo
On Wed, Feb 21, 2024 at 06:04:21PM +0100, Thomas Gleixner wrote:
> On Wed, Feb 21 2024 at 16:41, Thomas Gleixner wrote:
> > On Wed, Feb 21 2024 at 02:39, Leonardo Bras wrote:
> > But as I pointed out above the detection logic is flawed due to the
> > unconditional accumulation. Can you give the uncompiled below a test
> > ride with your scenario?
>
> Bah. Ignore this. I misread the code completely. No idea where my brain
> was.
>
> This thing triggers only when there are 100K interrupts and 99.9k of
> them unhandled. The 100k total resets the unhandled counts.
>
> Though one thing which strikes me odd is that this actually triggers at
> all because it needs 99.9k unhandled out of 100k total. That means on
> average every thread handler invocation handles 1000 hardware interrupts
> in one go. Is that even realistic?
Yeah, it triggers pretty easily if you bring up a VM with a serial console
and try to use it for something very verbose.
It was detected by someone trying to unpack a kernel source tarball.
Maybe this is an issue that only becomes reproducible for this and maybe a
couple extra drivers, so the solution will only need to be implemented in
those drivers when (if) this bug reproduces.
That being said, thank you for helping me improve my understanding of this
piece of code. I will put some effort into trying to find a solution that
works by changing generic code only, but I would like to know whether the
current proposal is valid if I am unable to find one.
Thanks!
Leo
On Fri, Feb 23 2024 at 01:37, Leonardo Bras wrote:
> On Wed, Feb 21, 2024 at 04:41:20PM +0100, Thomas Gleixner wrote:
>> There is usually no concurrency at all, except for administrative
>> operations like enable/disable or affinity changes. Those administrative
>> operations are not high frequency and the resulting cache line bouncing
>> is unavoidable even without that change. But does it matter in the
>> larger picture? I don't think so.
>
> That's a fair point, but there are some use cases that use CPU Isolation on
> top of PREEMPT_RT in order to reduce interference on a CPU running an RT
> workload.
>
> For those cases, IIRC the handler will run on a different (housekeeping)
> CPU when those IRQs originate on an Isolated CPU, meaning the above
> described cacheline bouncing is expected.
No. The affinity of the interrupt and the thread are strictly coupled
and always on the same CPU except for one instance during migration, but
that's irrelevant.
Thanks,
tglx