On 16/11/2017 16:36, Russell King - ARM Linux wrote:
> On Thu, Nov 16, 2017 at 04:26:51PM +0100, Marc Gonzalez wrote:
>> On 15/11/2017 14:13, Russell King - ARM Linux wrote:
>>
>>> udelay() needs to offer a consistent interface so that drivers know
>>> what to expect no matter what the implementation is. Making one
>>> implementation conform to your ideas while leaving the other
>>> implementations with other expectations is a recipe for bugs.
>>>
>>> If you really want to do this, fix the loops_per_jiffy implementation
>>> as well so that the consistency is maintained.
>>
>> Hello Russell,
>>
>> It seems to me that, when using DFS, there's a serious issue with loop-based
>> delays. (IIRC, it was you who pointed this out a few years ago.)
>>
>> If I'm reading arch/arm/kernel/smp.c correctly, loops_per_jiffy is scaled
>> when the frequency changes.
>>
>> But arch/arm/lib/delay-loop.S starts by loading the current value of
>> loops_per_jiffy, computes the number of times to loop, and then loops.
>> If the frequency increases when the core is in __loop_delay, the
>> delay will be much shorter than requested.
>>
>> Is this a correct assessment of the situation?
>
> Absolutely correct, and it's something that people are aware of, and
> have already catered for while writing their drivers.

In their cpufreq driver? In "real" device drivers that happen to use
delays?

On my system, the CPU frequency may ramp up from 120 MHz to 1.2 GHz.
If the frequency increases right after __loop_delay has loaded
loops_per_jiffy, udelay(100) will spin for only 10 microseconds,
since the loop count was computed for a core running 10 times slower.
That is likely to break any driver using udelay; see the sketch below.
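
To make the failure mode concrete, here is a self-contained userspace
model of the stale loop count. The loops_per_jiffy and HZ values are
illustrative, not taken from any real platform; the point is only that
the count is sampled once, the way delay-loop.S loads it up front:

#include <stdio.h>

#define HZ 100UL

static unsigned long loops_per_jiffy = 120000UL; /* calibrated at 120 MHz */

static unsigned long loops_for_udelay(unsigned long usecs)
{
	/* Sampled exactly once; any later rescaling of
	 * loops_per_jiffy by cpufreq is never seen. */
	return usecs * loops_per_jiffy * HZ / 1000000UL;
}

int main(void)
{
	unsigned long loops = loops_for_udelay(100); /* udelay(100) */

	/* cpufreq ramps the core 120 MHz -> 1.2 GHz and rescales
	 * loops_per_jiffy, but 'loops' is already fixed... */
	loops_per_jiffy *= 10;

	/* ...so the spin completes in a tenth of the requested time. */
	printf("spinning %lu loops = ~%lu us at the new rate\n",
	       loops, loops * 1000000UL / (loops_per_jiffy * HZ));
	return 0;
}

This prints "spinning 1200 loops = ~10 us at the new rate": the
requested 100 us shrank by the ratio of the two frequencies.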
How does one cater for that?
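
The only robust answer I can see is to take the CPU clock out of the
equation entirely: ARM lets a platform register a fixed-rate timer,
after which udelay() is timer-based rather than loop-based. A sketch
against arch/arm/include/asm/delay.h follows; the counter register,
the function names and the 27 MHz rate are all made up for
illustration:

#include <linux/init.h>
#include <linux/io.h>
#include <asm/delay.h>

static void __iomem *counter_base; /* hypothetical free-running counter */

static unsigned long my_read_counter(void)
{
	return readl(counter_base);
}

/* The counter ticks at a fixed rate regardless of cpufreq, so
 * udelay() no longer depends on the scaled CPU clock. */
static struct delay_timer my_delay_timer = {
	.read_current_timer	= my_read_counter,
	.freq			= 27000000, /* made-up 27 MHz rate */
};

void __init my_platform_time_init(void)
{
	/* counter_base = ioremap(...); error handling omitted */
	register_current_timer_delay(&my_delay_timer);
}
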
Regards.