Some drivers might access regmap in a context where a raw spinlock is
held. An example is drivers/irqchip/irq-ls-extirq.c, which calls
regmap_update_bits() from struct irq_chip::irq_set_type, which is a
method called by __irq_set_trigger() under the desc->lock raw spinlock.

Since desc->lock is a raw spinlock and the regmap internal lock for
mmio is a plain spinlock (which can become sleepable on RT), this is an
invalid locking scheme and we get a splat stating that this is a
"[ BUG: Invalid wait context ]".

It seems reasonable for regmap to have an option to use a raw spinlock
too, so add that to the config such that drivers can request it.

Suggested-by: Mark Brown <[email protected]>
Signed-off-by: Vladimir Oltean <[email protected]>
---
drivers/base/regmap/internal.h | 4 ++++
drivers/base/regmap/regmap.c | 35 +++++++++++++++++++++++++++++-----
include/linux/regmap.h | 2 ++
3 files changed, 36 insertions(+), 5 deletions(-)
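As a usage sketch (the driver name and register layout below are
hypothetical), an MMIO regmap would opt in by setting both fast_io,
which selects spinlock-based locking, and the new use_raw_spinlock flag:

    static const struct regmap_config foo_extirq_regmap_config = {
        .reg_bits         = 32,
        .val_bits         = 32,
        .reg_stride       = 4,
        .fast_io          = true,   /* spinlock-based locking... */
        .use_raw_spinlock = true,   /* ...with a raw spinlock */
    };

    map = devm_regmap_init_mmio(dev, base, &foo_extirq_regmap_config);

With such a map, regmap_update_bits() may legitimately be called while a
raw spinlock like desc->lock is held.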
diff --git a/drivers/base/regmap/internal.h b/drivers/base/regmap/internal.h
index 0097696c31de..b1905916f7af 100644
--- a/drivers/base/regmap/internal.h
+++ b/drivers/base/regmap/internal.h
@@ -53,6 +53,10 @@ struct regmap {
spinlock_t spinlock;
unsigned long spinlock_flags;
};
+ struct {
+ raw_spinlock_t raw_spinlock;
+ unsigned long raw_spinlock_flags;
+ };
};
regmap_lock lock;
regmap_unlock unlock;
diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
index fe3e38dd5324..e7ae477d2fcf 100644
--- a/drivers/base/regmap/regmap.c
+++ b/drivers/base/regmap/regmap.c
@@ -533,6 +533,23 @@ __releases(&map->spinlock)
spin_unlock_irqrestore(&map->spinlock, map->spinlock_flags);
}
+static void regmap_lock_raw_spinlock(void *__map)
+__acquires(&map->raw_spinlock)
+{
+ struct regmap *map = __map;
+ unsigned long flags;
+
+ raw_spin_lock_irqsave(&map->raw_spinlock, flags);
+ map->raw_spinlock_flags = flags;
+}
+
+static void regmap_unlock_raw_spinlock(void *__map)
+__releases(&map->raw_spinlock)
+{
+ struct regmap *map = __map;
+ raw_spin_unlock_irqrestore(&map->raw_spinlock, map->raw_spinlock_flags);
+}
+
static void dev_get_regmap_release(struct device *dev, void *res)
{
/*
@@ -770,11 +787,19 @@ struct regmap *__regmap_init(struct device *dev,
} else {
if ((bus && bus->fast_io) ||
config->fast_io) {
- spin_lock_init(&map->spinlock);
- map->lock = regmap_lock_spinlock;
- map->unlock = regmap_unlock_spinlock;
- lockdep_set_class_and_name(&map->spinlock,
- lock_key, lock_name);
+ if (config->use_raw_spinlock) {
+ raw_spin_lock_init(&map->raw_spinlock);
+ map->lock = regmap_lock_raw_spinlock;
+ map->unlock = regmap_unlock_raw_spinlock;
+ lockdep_set_class_and_name(&map->raw_spinlock,
+ lock_key, lock_name);
+ } else {
+ spin_lock_init(&map->spinlock);
+ map->lock = regmap_lock_spinlock;
+ map->unlock = regmap_unlock_spinlock;
+ lockdep_set_class_and_name(&map->spinlock,
+ lock_key, lock_name);
+ }
} else {
mutex_init(&map->mutex);
map->lock = regmap_lock_mutex;
diff --git a/include/linux/regmap.h b/include/linux/regmap.h
index f5f08dd0a116..7a3cda794501 100644
--- a/include/linux/regmap.h
+++ b/include/linux/regmap.h
@@ -344,6 +344,7 @@ typedef void (*regmap_unlock)(void *);
* @ranges: Array of configuration entries for virtual address ranges.
* @num_ranges: Number of range configuration entries.
* @use_hwlock: Indicate if a hardware spinlock should be used.
+ * @use_raw_spinlock: Indicate if a raw spinlock should be used.
* @hwlock_id: Specify the hardware spinlock id.
* @hwlock_mode: The hardware spinlock mode, should be HWLOCK_IRQSTATE,
* HWLOCK_IRQ or 0.
@@ -403,6 +404,7 @@ struct regmap_config {
unsigned int num_ranges;
bool use_hwlock;
+ bool use_raw_spinlock;
unsigned int hwlock_id;
unsigned int hwlock_mode;
--
2.25.1
On Wed, Aug 25 2021 at 23:50, Vladimir Oltean wrote:
> Some drivers might access regmap in a context where a raw spinlock is
> held. An example is drivers/irqchip/irq-ls-extirq.c, which calls
> regmap_update_bits() from struct irq_chip::irq_set_type, which is a
> method called by __irq_set_trigger() under the desc->lock raw spinlock.
>
> Since desc->lock is a raw spinlock and the regmap internal lock for
> mmio is a plain spinlock (which can become sleepable on RT), this is an
> invalid locking scheme and we get a splat stating that this is a
> "[ BUG: Invalid wait context ]".
>
> It seems reasonable for regmap to have an option to use a raw spinlock
> too, so add that to the config such that drivers can request it.
What's reasonable about that?

What exactly prevents the regmap locking from using a raw spinlock
unconditionally?

Even for the case where the regmap is not dealing with irq chips, it does
not make any sense to protect low level operations on shared registers
with a regular spinlock. I might be missing something though...

Thanks,
tglx
On Fri, Aug 27, 2021 at 01:01:35AM +0200, Thomas Gleixner wrote:
> On Wed, Aug 25 2021 at 23:50, Vladimir Oltean wrote:
> > It seems reasonable for regmap to have an option to use a raw spinlock
> > too, so add that to the config such that drivers can request it.
>
> What's reasonable about that?
>
> What exactly prevents the regmap locking from using a raw spinlock
> unconditionally?
>
> Even for the case where the regmap is not dealing with irq chips, it does
> not make any sense to protect low level operations on shared registers
> with a regular spinlock. I might be missing something though...
Mark, any comments?
Generally it is said that misusing raw spinlocks is detrimental to the
real-time performance of the system, and I don't really have a good
feeling for what constitutes misuse vs what is truly justified (in fact
I did start the thread with "apologies for my novice level of
understanding").

On the other hand, while making sequences of MMIO reads/writes
preemptible does seem like a bit too much overhead, it doesn't sound
like it would break anything either, so...

But I will say that I've tested this approach and it would solve both my
problems (the stack trace with ls-extirq and the fact that I would like
to avoid reworking the ls-extirq driver too much), as well as problems I
never knew I had: it turns out, armada_37xx_irq_set_type() uses regmap
too (and a sleepable spinlock too, irq_lock). The latter would still
have to be manually patched out.
On Fri, Aug 27 2021 at 16:12, Vladimir Oltean wrote:
> On Fri, Aug 27, 2021 at 01:01:35AM +0200, Thomas Gleixner wrote:
>> Even for the case where the regmap is not dealing with irq chips, it does
>> not make any sense to protect low level operations on shared registers
>> with a regular spinlock. I might be missing something though...
>
> Mark, any comments?
>
> Generally it is said that misusing raw spinlocks is detrimental to the
> real-time performance of the system, and I don't really have a good
> feeling for what constitutes misuse vs what is truly justified (in fact
> I did start the thread with "apologies for my novice level of
> understanding").
>
> On the other hand, while making sequences of MMIO reads/writes
> preemptible does seem like a bit too much overhead, it doesn't sound
> like it would break anything either, so...
The question is how long those sequences are.

If it's just a pair or so, then the raw spinlock protection definitely
has a smaller worst case than the sleeping spinlock in the contended
case.

OTOH, if regmap operations consist of several dozen MMIO accesses,
then the preempt-disabled region might be quite long.

I'm not familiar enough with regmaps to make a judgement here.

Thanks,
tglx
On Fri, Aug 27, 2021 at 09:59:56PM +0200, Thomas Gleixner wrote:
> On Fri, Aug 27 2021 at 16:12, Vladimir Oltean wrote:
> > On Fri, Aug 27, 2021 at 01:01:35AM +0200, Thomas Gleixner wrote:
> >> Even for the case where the regmap is not dealing with irq chips, it does
> >> not make any sense to protect low level operations on shared registers
> >> with a regular spinlock. I might be missing something though...
> >
> > Mark, any comments?
> >
> > Generally it is said that misusing raw spinlocks is detrimental to the
> > real-time performance of the system, and I don't really have a good
> > feeling for what constitutes misuse vs what is truly justified (in fact
> > I did start the thread with "apologies for my novice level of
> > understanding").
> >
> > On the other hand, while making sequences of MMIO reads/writes
> > preemptible does seem like a bit too much overhead, it doesn't sound
> > like it would break anything either, so...
>
> The question is how long those sequences are.
>
> If it's just a pair or so, then the raw spinlock protection definitely
> has a smaller worst case than the sleeping spinlock in the contended
> case.
>
> OTOH, if regmap operations consist of several dozen MMIO accesses,
> then the preempt-disabled region might be quite long.
>
> I'm not familiar enough with regmaps to make a judgement here.
I think "how long are the read/write regmap sequences" is outside of
regmap's control, but rather a matter of usage. This would point towards
the current solution, where users select whether to use raw spinlocks or
not, being the preferable one.
On 27/08/2021 01.01, Thomas Gleixner wrote:
> On Wed, Aug 25 2021 at 23:50, Vladimir Oltean wrote:
>
>> Some drivers might access regmap in a context where a raw spinlock is
>> held. An example is drivers/irqchip/irq-ls-extirq.c, which calls
>> regmap_update_bits() from struct irq_chip::irq_set_type, which is a
>> method called by __irq_set_trigger() under the desc->lock raw spinlock.
>>
>> Since desc->lock is a raw spinlock and the regmap internal lock for
>> mmio is a plain spinlock (which can become sleepable on RT), this is an
>> invalid locking scheme and we get a splat stating that this is a
>> "[ BUG: Invalid wait context ]".
>>
>> It seems reasonable for regmap to have an option to use a raw spinlock
>> too, so add that to the config such that drivers can request it.
>
> What's reasonable about that?
>
> What exactly prevents the regmap locking from using a raw spinlock
> unconditionally?
Perhaps this:
    /*
     * When we write in fast-paths with regmap_bulk_write() don't allocate
     * scratch buffers with sleeping allocations.
     */
    if ((bus && bus->fast_io) || config->fast_io)
        map->alloc_flags = GFP_ATOMIC;
    else
        map->alloc_flags = GFP_KERNEL;
i.e. the regmap code can actually do allocations under whatever internal
lock it uses. So ISTM that any regmap that uses a raw_spinlock (whether
unconditionally or via Vladimir's opt-in) cannot be used with
regmap_bulk_write().
ISTM using regmap for mmio makes things more complicated than necessary.
Rasmus
On Fri, Aug 27, 2021 at 01:01:35AM +0200, Thomas Gleixner wrote:
> On Wed, Aug 25 2021 at 23:50, Vladimir Oltean wrote:
> > It seems reasonable for regmap to have an option to use a raw spinlock
> > too, so add that to the config such that drivers can request it.
> What's reasonable about that?
> What exactly prevents the regmap locking from using a raw spinlock
> unconditionally?
We definitely can't use a raw spinlock unconditionally since we
support register maps on devices connected via buses which can't
be accessed atomically, so we need the option of using mutexes.
> Even for the case where the regmap is not dealing with irq chips, it does
> not make any sense to protect low level operations on shared registers
> with a regular spinlock. I might be missing something though...
That probably does make sense; I think we're just using regular
spinlocks mainly because they're the default rather than because
anyone put huge amounts of thought into it. IIRC the first users
were using spinlocks for their locking when they were converted.
On Mon, Aug 30, 2021 at 01:02:33PM +0200, Rasmus Villemoes wrote:
> i.e. the regmap code can actually do allocations under whatever internal
> lock it uses. So ISTM that any regmap that uses a raw_spinlock (whether
> unconditionally or via Vladimir's opt-in) cannot be used with
> regmap_bulk_write().
No, anything that's using a spinlock already needs to avoid any
allocations, ensuring that either there's no cache or that the
cache is fully initialized with defaults. The only non-cache
allocations that might be done are used by buses that sleep
anyway.
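As a sketch of what "fully initialized with defaults" can look like (the
register layout and default values here are made up), a fast_io map can
have its cache sized and prefilled at regmap_init() time through
reg_defaults, so nothing is allocated under the spinlock afterwards:

    static const struct reg_default foo_reg_defaults[] = {
        { .reg = 0x0, .def = 0x0 },
        { .reg = 0x4, .def = 0x1 },
    };

    static const struct regmap_config foo_config = {
        .reg_bits         = 32,
        .val_bits         = 32,
        .reg_stride       = 4,
        .fast_io          = true,
        .max_register     = 0x4,
        .cache_type       = REGCACHE_FLAT,
        .reg_defaults     = foo_reg_defaults,
        .num_reg_defaults = ARRAY_SIZE(foo_reg_defaults),
    };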
On Mon, Aug 30 2021 at 13:19, Mark Brown wrote:
> On Fri, Aug 27, 2021 at 01:01:35AM +0200, Thomas Gleixner wrote:
>> On Wed, Aug 25 2021 at 23:50, Vladimir Oltean wrote:
>
>> > It seems reasonable for regmap to have an option to use a raw spinlock
>> > too, so add that to the config such that drivers can request it.
>
>> What's reasonable about that?
>
>> What exactly prevents the regmap locking from using a raw spinlock
>> unconditionally?
>
> We definitely can't use a raw spinlock unconditionally since we
> support register maps on devices connected via buses which can't
> be accessed atomically, so we need the option of using mutexes.
The mutex part is fine. For slow buses this is obviously required.
>> Even for the case where the regmap is not dealing with irq chips, it does
>> not make any sense to protect low level operations on shared registers
>> with a regular spinlock. I might be missing something though...
>
> That probably does make sense; I think we're just using regular
> spinlocks mainly because they're the default rather than because
> anyone put huge amounts of thought into it. IIRC the first users
> were using spinlocks for their locking when they were converted.
So if the actual spinlock-protected operations are not doing anything
other than accessing preallocated cache memory and a few MMIO
operations, then converting them to raw spinlocks should have no real
impact on RT.

One way to do that is obviously starting with the patch from Vladimir
and then converting them one by one, so the assumption that they are
not doing anything nasty (in the RT sense) can be validated.
Thanks,
tglx
On Mon, Aug 30, 2021 at 04:16:04PM +0200, Thomas Gleixner wrote:
> On Mon, Aug 30 2021 at 13:19, Mark Brown wrote:
> > That probably does make sense; I think we're just using regular
> > spinlocks mainly because they're the default rather than because
> > anyone put huge amounts of thought into it. IIRC the first users
> > were using spinlocks for their locking when they were converted.
> So if the actual spinlock-protected operations are not doing anything
> other than accessing preallocated cache memory and a few MMIO
> operations, then converting them to raw spinlocks should have no real
> impact on RT.
I think Vladimir's point that something might try to use one of the APIs
that can do multiple register writes atomically to generate a very long
register write sequence is valid here. It's far from the common case,
but it'd be hard to audit; it's going to be a lot easier to handle going
to raw spinlocks in the cases where it's specifically needed than to
keep on top of ensuring that none of the users are causing issues or
start causing issues in the future. This does make me feel it's a bit
safer to leave the default the way it is, since if you get it wrong then
lockdep will tend to notice very quickly, while it's less likely that
we'd get tooling spotting issues the other way around.
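For instance (a sketch, with a made-up register sequence), something
like regmap_multi_reg_write() performs the entire sequence under the
map's lock, so the length of the critical section is bounded only by
what the caller passes in:

    static const struct reg_sequence foo_init_seq[] = {
        { 0x00, 0x01 },
        { 0x04, 0x02 },
        /* ... potentially dozens more writes ... */
    };

    regmap_multi_reg_write(map, foo_init_seq, ARRAY_SIZE(foo_init_seq));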
> One way to do that is obviously starting with the patch from Vladimir
> and then converting them one by one, so the assumption that they are
> not doing anything nasty (in the RT sense) can be validated.
Vladimir's patch is in Linus' tree now, so users that can safely do so
can start using raw spinlocks.
On Wed, Sep 01 2021 at 17:05, Mark Brown wrote:
> On Mon, Aug 30, 2021 at 04:16:04PM +0200, Thomas Gleixner wrote:
>> On Mon, Aug 30 2021 at 13:19, Mark Brown wrote:
>
>> > That probably does make sense; I think we're just using regular
>> > spinlocks mainly because they're the default rather than because
>> > anyone put huge amounts of thought into it. IIRC the first users
>> > were using spinlocks for their locking when they were converted.
>
>> So if the actual spinlock-protected operations are not doing anything
>> other than accessing preallocated cache memory and a few MMIO
>> operations, then converting them to raw spinlocks should have no real
>> impact on RT.
>
> I think Vladimir's point that something might try to use one of the APIs
> that can do multiple register writes atomically to generate a very long
> register write sequence is valid here. It's far from the common case,
> but it'd be hard to audit; it's going to be a lot easier to handle going
> to raw spinlocks in the cases where it's specifically needed than to
> keep on top of ensuring that none of the users are causing issues or
> start causing issues in the future. This does make me feel it's a bit
> safer to leave the default the way it is, since if you get it wrong then
> lockdep will tend to notice very quickly, while it's less likely that
> we'd get tooling spotting issues the other way around.
Fair enough.
>> One way to do that is obviously starting with the patch from Vladimir
>> and then converting them one by one, so the assumption that they are
>> not doing anything nasty (in the RT sense) can be validated.
>
> Vladimir's patch is in Linus' tree now, so users that can safely do so
> can start using raw spinlocks.
ok