Hi all,
I was investigating the source of abnormal irq-latency spikes on an i.MX6
(ARM) board, and discovered this:
# tracer: preemptirqsoff
#
# preemptirqsoff latency trace v1.1.5 on 4.4.0-rc4+
# --------------------------------------------------------------------
# latency: 2068 us, #4/4, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:1)
# -----------------
# | task: mmcqd/0-92 (uid:0 nice:0 policy:0 rt_prio:0)
# -----------------
# => started at: _raw_spin_lock_irqsave
# => ended at: _raw_spin_unlock_irqrestore
#
#
# _------=> CPU#
# / _-----=> irqs-off
# | / _----=> need-resched
# || / _---=> hardirq/softirq
# ||| / _--=> preempt-depth
# |||| / delay
# cmd pid ||||| time | caller
# \ / ||||| \ | /
mmcqd/0-92 0d... 1us#: _raw_spin_lock_irqsave
mmcqd/0-92 0.n.1 2066us : _raw_spin_unlock_irqrestore
mmcqd/0-92 0.n.1 2070us+: trace_preempt_on <-_raw_spin_unlock_irqrestore
mmcqd/0-92 0.n.1 2107us : <stack trace>
=> sdhci_runtime_resume_host
=> __rpm_callback
=> rpm_callback
=> rpm_resume
=> __pm_runtime_resume
=> __mmc_claim_host
=> mmc_blk_issue_rq
=> mmc_queue_thread
=> kthread
=> ret_from_fork
2 ms with interrupts disabled!!! Much to my dismay, I later discovered that
this isn't even the worst-case scenario. I also discovered that this has been
in the kernel for a long time without a fix (I have tested from 3.17 to
4.4-rc4). There was an attempt by someone to address this back in 2010, but
apparently it never hit mainline.
Going through the code in sdhci.c, I found this troublesome code-path:
sdhci_do_set_ios() {
	spin_lock_irqsave(&host->lock, flags);
	...
	sdhci_reinit() --> sdhci_init() --> sdhci_do_reset() -->
		host->ops->reset() --> sdhci_reset()
	...
	spin_unlock_irqrestore(&host->lock, flags);
}
And in sdhci_reset(), which may be called with the spinlock held:
	...
	/* Wait max 100 ms */
	timeout = 100;

	/* hw clears the bit when it's done */
	while (sdhci_readb(host, SDHCI_SOFTWARE_RESET) & mask) {
		if (timeout == 0) {
			pr_err("%s: Reset 0x%x never completed.\n",
				mmc_hostname(host->mmc), (int)mask);
			sdhci_dumpregs(host);
			return;
		}
		timeout--;
		mdelay(1);
	}
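For illustration only (an untested sketch, not a proposal: whether host->lock
can actually be dropped around the reset is exactly what I don't know), the
wait itself could sleep instead of spinning with interrupts off:

	/*
	 * Untested sketch: a sleepable variant of the reset wait.  Only
	 * legal if host->lock is NOT held here, which is precisely the
	 * part that would need auditing.
	 */
	static void sdhci_reset_sketch(struct sdhci_host *host, u8 mask)
	{
		unsigned long deadline = jiffies + msecs_to_jiffies(100);

		sdhci_writeb(host, mask, SDHCI_SOFTWARE_RESET);

		/* hw clears the bit when it's done */
		while (sdhci_readb(host, SDHCI_SOFTWARE_RESET) & mask) {
			if (time_after(jiffies, deadline)) {
				pr_err("%s: Reset 0x%x never completed.\n",
				       mmc_hostname(host->mmc), (int)mask);
				sdhci_dumpregs(host);
				return;
			}
			usleep_range(100, 200);	/* sleep instead of mdelay(1) */
		}
	}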
I am wondering: either there must be a reason this hasn't been fixed in such a
long time, or I am not understanding this correctly, so please enlighten me.
Before making a cowboy attempt at "fixing" this, I'd really like to know why I
am seeing this.
I mean... how can such a problem go unnoticed and unfixed for so long?
Will an attempt at fixing this issue even be accepted?
Best regards,
--
David Jander
Protonic Holland.
On Thursday, December 17, 2015 at 11:28 +0100, David Jander wrote:
> Hi all,
>
> I was investigating the source of abnormal irq-latency spikes on an i.MX6
> (ARM) board, and discovered this:
>
> [...]
>
> I am wondering: either there must be a reason this hasn't been fixed in
> such a long time, or I am not understanding this correctly, so please
> enlighten me. Before making a cowboy attempt at "fixing" this, I'd really
> like to know why I am seeing this.
> I mean... how can such a problem go unnoticed and unfixed for so long?
> Will an attempt at fixing this issue even be accepted?
>
I would like to see the sdhci spinlock killed and replaced by a mutex
for exactly this reason.
That said, your problem is card polling, when no card is present in the
slot. This is most probably caused by CD gpios having the wrong
polarity.
Regards,
Lucas
--
Pengutronix e.K. | Lucas Stach |
Industrial Linux Solutions | http://www.pengutronix.de/ |
Hi Lucas,
Thanks for responding.
On Thu, 17 Dec 2015 12:03:10 +0100
Lucas Stach <[email protected]> wrote:
> On Thursday, December 17, 2015 at 11:28 +0100, David Jander wrote:
> > [...]
>
> I would like to see the sdhci spinlock killed and replaced by a mutex
> for exactly this reason.
>
> That said, your problem is card polling, when no card is present in the
> slot. This is most probably caused by CD gpios having the wrong
> polarity.
... or not having a CD pin at all.
I am using an embedded eMMC chip and a uSD card inserted into a slot. The card
is present and also detected as such. If I never access the card, I see no
spikes (filesystem is mounted but not accessed). If I try to read a file or
directory I get the above trace.
OTOH, if I disable PM functionality in the kernel, the spike is gone, and
worst-case latency is in the 300us range, so I don't think this is related to
card polling.
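A stop-gap we could probably live with would be to keep just the SDHCI host
from runtime suspending, instead of disabling PM kernel-wide; something like
this untested sketch, which of course only hides the problem:

	/*
	 * Untested stop-gap sketch: forbid runtime suspend for the host's
	 * device so the resume-time reset (and its irqs-off wait) never
	 * runs.  Trades power for latency; hides the bug, doesn't fix it.
	 */
	static void sdhci_keep_awake_sketch(struct device *dev)
	{
		pm_runtime_forbid(dev);
	}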
Best regards,
--
David Jander
Protonic Holland.
On Thursday, December 17, 2015 at 12:20 +0100, David Jander wrote:
> [...]
> ... or not having a CD pin at all.
> I am using an embedded eMMC chip and a uSD card inserted into a slot. The card
> is present and also detected as such. If I never access the card, I see no
> spikes (filesystem is mounted but not accessed). If I try to read a file or
> directory I get the above trace.
> OTOH, if I disable PM functionality in the kernel, the spike is gone, and
> worst-case latency is in the 300us range, so I don't think this is related to
> card polling.
>
You may be right. If the SDHCI host gets runtime suspended, it needs a
reset after waking up, causing this latency.
Killing the sdhci spinlock may not be straightforward, as a lot of code
paths need to be audited for irq safety, but it's the only way to fix
this properly.
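To be clear about the shape of it, a hypothetical sketch (nothing here is
audited, and io_mutex is a made-up field):

	/*
	 * Hypothetical sketch only: with a sleepable lock, the polling
	 * and mdelay() in the reset path become acceptable.  But first,
	 * every user of host->lock that can run in hardirq context has
	 * to be moved out of it.
	 */
	static void sdhci_do_set_ios_sketch(struct sdhci_host *host,
					    struct mmc_ios *ios)
	{
		mutex_lock(&host->io_mutex);	/* made-up replacement for host->lock */
		/* ... */
		sdhci_reinit(host);		/* may now sleep in sdhci_reset() */
		/* ... */
		mutex_unlock(&host->io_mutex);
	}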
Regards,
Lucas
--
Pengutronix e.K. | Lucas Stach |
Industrial Linux Solutions | http://www.pengutronix.de/ |
On 17 December 2015 at 12:27, Lucas Stach <[email protected]> wrote:
> On Thursday, December 17, 2015 at 12:20 +0100, David Jander wrote:
>> [...]
>>
> You may be right. If the SDHCI host gets runtime suspended, it needs a
> reset after waking up, causing this latency.
>
> Killing the sdhci spinlock may not be straightforward, as a lot of code
> paths need to be audited for irq safety, but it's the only way to fix
> this properly.
>
> Regards,
> Lucas
>
If/when you decide to fix this issue, please keep in mind the following things.
- Try to convert SDHCI into a pure library. No more quirks or callbacks.
- I assume we can simplify lots of code if we convert SDHCI into using
a threaded IRQ instead of the tasklet.
Any patches that move SDHCI in this direction will be greatly appreciated!
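To give a rough idea of the threaded-IRQ point, an untested sketch (the
handler names and the pending_irqs field are made up):

	/*
	 * Untested sketch: ack in hardirq context, do the real work in a
	 * sleepable IRQ thread instead of the tasklet.
	 */
	static irqreturn_t sdhci_hardirq_sketch(int irq, void *dev_id)
	{
		struct sdhci_host *host = dev_id;
		u32 intmask = sdhci_readl(host, SDHCI_INT_STATUS);

		if (!intmask || intmask == 0xffffffff)
			return IRQ_NONE;

		sdhci_writel(host, intmask, SDHCI_INT_STATUS);	/* ack */
		host->pending_irqs = intmask;			/* made-up field */
		return IRQ_WAKE_THREAD;
	}

	static irqreturn_t sdhci_thread_sketch(int irq, void *dev_id)
	{
		/* process context: may take a mutex, may sleep */
		return IRQ_HANDLED;
	}

	/* at probe time, instead of request_irq() + tasklet_init(): */
	static int sdhci_request_irq_sketch(struct sdhci_host *host)
	{
		return request_threaded_irq(host->irq, sdhci_hardirq_sketch,
					    sdhci_thread_sketch, IRQF_SHARED,
					    mmc_hostname(host->mmc), host);
	}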
Kind regards
Uffe
On Thu, 17 Dec 2015 12:39:20 +0100
Ulf Hansson <[email protected]> wrote:
> On 17 December 2015 at 12:27, Lucas Stach <[email protected]> wrote:
> > [...]
>
> If/when you decide to fix this issue, please keep in mind the following
> things.
>
> - Try to convert SDHCI into a pure library. No more quirks or callbacks.
> - I assume we can simplify lots of code if we convert SDHCI into using
> a threaded IRQ instead of the tasklet.
>
> Any patches that move SDHCI in this direction will be greatly appreciated!
Ok, this sounds like a good way to go. Unfortunately, it also sounds like a
major endeavor, for which good knowledge of the SDHCI standard is necessary.
That knowledge comes from documentation that is not freely available, AFAIK.
This probably also explains why there has never been a real fix. On top of
that, the whole sdhci code currently seems to be unmaintained.
I was studying the code a bit more, and I now understand that I am not even
close to having the experience and standards knowledge it takes to pull this
off reliably. I guess whoever takes on this task may as well become the
official maintainer afterwards...
OTOH, we pretty much depend on this driver now, since all of our new i.MX6/7
boards have eMMC flash. We also use the flexcan peripheral on all designs,
which is especially sensitive to these latency spikes, so we will have to do
something in the long run... we cannot live forever with PM disabled ;-)
Best regards,
--
David Jander
Protonic Holland.
[...]
>> If/when you decide to fix this issue, please keep in mind the following
>> things.
>>
>> - Try to convert SDHCI into a pure library. No more quirks or callbacks.
>> - I assume we can simplify lots of code if we convert SDHCI into using
>> a threaded IRQ instead of the tasklet.
>>
>> Any patches that move SDHCI in this direction will be greatly appreciated!
>
> Ok, this sounds like a good way to go. Unfortunately, it also sounds like a
> major endeavor, for which good knowledge of the SDHCI standard is necessary.
> That knowledge comes from documentation that is not freely available, AFAIK.
> This probably also explains why there has never been a real fix. On top of
> that, the whole sdhci code currently seems to be unmaintained.
> I was studying the code a bit more, and I now understand that I am not even
> close to having the experience and standards knowledge it takes to pull this
> off reliably. I guess whoever takes on this task may as well become the
> official maintainer afterwards...
You are right, a maintainer is needed for sdhci.
Also, I am a bit surprised that no one has stepped up, especially since
it is indeed *very* widely used.
> OTOH, we pretty much depend on this driver now, since all of our new i.MX6/7
> boards have eMMC flash. We also use the flexcan peripheral on all designs,
> which is especially sensitive to these latency spikes, so we will have to do
> something in the long run... we cannot live forever with PM disabled ;-)
>
Unfortunately, PM is only one of the problems.
The code is fragile in general. We have kind of reached the point where
a change that fixes one issue may cause another.
Kind regards
Uffe
Dear Ulf,
On Thu, 17 Dec 2015 15:54:54 +0100
Ulf Hansson <[email protected]> wrote:
> [...]
>
> >> [...]
> >
> > Ok, this sounds like a good way to go. Unfortunately, it also sounds like
> > a major endeavor, for which good knowledge of the SDHCI standard is
> > necessary. That knowledge comes from documentation that is not freely
> > available, AFAIK. This probably also explains why there has never been a
> > real fix. On top of that, the whole sdhci code currently seems to be
> > unmaintained. I was studying the code a bit more, and I now understand
> > that I am not even close to having the experience and standards knowledge
> > it takes to pull this off reliably. I guess whoever takes on this task may
> > as well become the official maintainer afterwards...
>
> You are right, a maintainer is needed for sdhci.
>
> Also, I am a bit surprised that no one has stepped up, especially since
> it is indeed *very* widely used.
So you probably understand my surprise at the state of things. I am only
casually passing by because I have a latency problem...
> > OTOH, we pretty much depend on this driver now, since all of our new
> > i.MX6/7 boards have eMMC flash. We also use the flexcan peripheral on all
> > designs, which is especially sensitive to these latency spikes, so we
> > will have to do something in the long run... we cannot live forever with
> > PM disabled ;-)
> >
>
> Unfortunately, PM is only one of the problems.
I already had that suspicion while looking at the code...
> The code is fragile in general. We have kind of reached the point where
> a change that fixes one issue may cause another.
Oh, that is indeed bad.
I wish I were in a position to do this... but this really goes beyond my time
and my knowledge. I think most of the effort will be in cleaning up the mess
and making sure that each of the many users works well afterwards, and it
definitely takes someone who knows the code (and its users) very well to pull
this off.
Best regards,
--
David Jander
Protonic Holland.
On Thu, Dec 17, 2015 at 01:22:29PM +0100, David Jander wrote:
> Ok, this sounds like a good way to go. Unfortunately, it also sounds like a
> major endeavor, for which good knowledge of the SDHCI standard is necessary.
That's not entirely true.
Anyone who is really good at programming can fix this: it's a matter of
making changes via a series of code transformations which allow you to
reach a goal.
As far as I can see, there are two solutions for SDHCI:
1. We chuck the existing crappy driver away and start over.
2. We change the existing driver to improve it, which requires the
transformation approach.
When I say "transformation", it's about making just one change at a time,
such as creating a new function which contains shared code, and then
arranging for it to be called.
The series I did (starting at da91a8f9c0f5) is mostly done via
this method - when modifying a complex driver, I think it's the only
way to make changes safely. The approach has many advantages; the
most important is that the changes should look obvious and trivial,
even though the sum of the changes may be complex.
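A made-up micro-example of what one such transformation looks like (not
taken from the actual series):

	/*
	 * Step 1: the same open-coded reset wait exists in two places,
	 * so one patch gives the shared code a name...
	 */
	static void sdhci_wait_reset_done(struct sdhci_host *host, u8 mask)
	{
		/* hw clears the bit when it's done */
		while (sdhci_readb(host, SDHCI_SOFTWARE_RESET) & mask)
			mdelay(1);
	}

	/*
	 * Step 2..n: convert the callers, one per patch, with no behaviour
	 * change in any individual patch.  Each step is trivially
	 * reviewable, even if the sum of the steps is a big restructuring.
	 */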
--
RMK's Patch system: http://www.arm.linux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.
On Thu, Dec 17, 2015 at 04:09:03PM +0100, David Jander wrote:
>
> Dear Ulf,
>
> On Thu, 17 Dec 2015 15:54:54 +0100
> Ulf Hansson <[email protected]> wrote:
> > The code is fragile in general. We have kind of reached the point where
> > a change that fixes one issue may cause another.
>
> Oh, that is indeed bad.
> I wish I were in a position to do this... but this really goes beyond my
> time and my knowledge. I think most of the effort will be in cleaning up the
> mess and making sure that each of the many users works well afterwards, and
> it definitely takes someone who knows the code (and its users) very well to
> pull this off.
By way of illustration, when SolidRun released their Hummingboards, I
spent a while working with Jon Nettleton getting UHS support going on
these boards (which initially required a hardware hack). I put
together a patch set which resolved all the problems there were at
that time.
Yesterday, I tried adding the necessary bits to DT, as people have
started reporting _one_ problem with the SDHCI driver with UHS cards.
What I found dismayed me: a total of six problems of varying severity
in around two hours.
This evening, I'm starting to work through those problems, starting
with the damned trivial ones where printk()s are inappropriately
noisy, which can only come down to a lack of review of submissions
before applying changes. Some of these are in core MMC code, so I'm
afraid they aren't attributable to "no sdhci maintainer".
E.g., this one is particularly annoying, as it takes something like 23
attempts to probe one of two SDHCI interfaces, so we end up with
24 "voltage-ranges unspecified" messages at boot time.
Author: Russell King <[email protected]>
Date: Fri Dec 18 19:57:05 2015 +0000
mmc: core: shut up "voltage-ranges unspecified" pr_info()
Each time a driver such as sdhci-esdhc-imx is probed, we get a info
printk complaining that the DT voltage-ranges property has not been
specified.
However, the DT binding specifically says that the voltage-ranges
property is optional. That means we should not be complaining that
DT hasn't specified this property: by indicating that it's optional,
it is valid not to have the property in DT.
Silence the warning if the property is missing.
Signed-off-by: Russell King <[email protected]>
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 5ae89e48fd85..b5e663b67cb5 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -1220,8 +1220,12 @@ int mmc_of_parse_voltage(struct device_node *np, u32 *mask)
 	voltage_ranges = of_get_property(np, "voltage-ranges", &num_ranges);
 	num_ranges = num_ranges / sizeof(*voltage_ranges) / 2;
-	if (!voltage_ranges || !num_ranges) {
-		pr_info("%s: voltage-ranges unspecified\n", np->full_name);
+	if (!voltage_ranges) {
+		pr_debug("%s: voltage-ranges unspecified\n", np->full_name);
+		return -EINVAL;
+	}
+	if (!num_ranges) {
+		pr_err("%s: voltage-ranges empty\n", np->full_name);
 		return -EINVAL;
 	}
--
RMK's Patch system: http://www.arm.linux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.