2004-09-09 13:52:49

by Henry Margies

Subject: Is there a problem in timeval_to_jiffies?

Hello.


I'm working on an arm-based embedded device running kernel 2.6.9.
I also asked this question on the arm mailing list, but nobody
could answer my questions there, so I will try here :)

I have some problems with itimers. For example, if I set up a
timer with a period of 20ms, the system needs 30ms to send the
signal. I figured out that it always needs 10ms more than I
asked for.

The problem seems to be located in the timeval_to_jiffies()
function.

In function do_setitimer() the following calculation is done:

i = timeval_to_jiffies(&value->it_interval);

... where i is the interval for my timer. The problem is that
for it_interval = 0 seconds and 20000 microseconds, i = 3. But
shouldn't it be 2? It looks like the problem is somewhere in
here (timeval_to_jiffies()):

return (((u64)sec * SEC_CONVERSION) +
        (((u64)usec * USEC_CONVERSION + USEC_ROUND) >>
         (USEC_JIFFIE_SC - SEC_JIFFIE_SC))) >>
       SEC_JIFFIE_SC;

I don't understand the formula in all its details, but to me it
looks like the problem is in USEC_ROUND.
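
For reference, here is a minimal userspace sketch of that computation,
plugging in HZ=100 and TICK_NSEC=10000000 (the arm values reported later
in this thread). The shift widths are reconstructed from the 2.6-era
jiffies.h, so treat the exact constants as assumptions; the point is
only to reproduce the i = 3 result for 20ms:

#include <stdio.h>

typedef unsigned long long u64;

#define TICK_NSEC       10000000ULL           /* exactly 10ms per tick */
#define SEC_JIFFIE_SC   25                    /* reconstructed for HZ=100 */
#define USEC_JIFFIE_SC  (SEC_JIFFIE_SC + 19)  /* = 44 */
#define SEC_CONVERSION \
        ((((u64)1000000000 << SEC_JIFFIE_SC) + TICK_NSEC - 1) / TICK_NSEC)
#define USEC_CONVERSION \
        ((((u64)1000 << USEC_JIFFIE_SC) + TICK_NSEC - 1) / TICK_NSEC)
#define USEC_ROUND      (((u64)1 << USEC_JIFFIE_SC) - 1)

static unsigned long tv2j(unsigned long sec, unsigned long usec)
{
        return (((u64)sec * SEC_CONVERSION) +
                (((u64)usec * USEC_CONVERSION + USEC_ROUND) >>
                 (USEC_JIFFIE_SC - SEC_JIFFIE_SC))) >> SEC_JIFFIE_SC;
}

int main(void)
{
        /* 20000us is exactly 2 ticks, but USEC_CONVERSION is rounded
         * up, so the scaled product lands just past the 2-jiffie mark
         * and USEC_ROUND then rounds the result up to 3. */
        printf("20000us -> %lu jiffies\n", tv2j(0, 20000));
        return 0;
}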

Any ideas?

Thx in advance,
Henry

--

Hi! I'm a .signature virus! Copy me into your
~/.signature to help me spread!


2004-09-12 14:37:21

by Henry Margies

Subject: Re: Is there a problem in timeval_to_jiffies?

Hello


Why is nobody answering my question? I also tested my application on
x86. The result is the same. To me, it looks like there is a problem.
The only difference is that my x86 has a TICK value of 1ms and my arm
device a value of 10ms.

Imagine, there are 3 timers.

timer1 is for 1s,
timer2 is for 0.1s,
timer3 is for 0.01s.

Now, timer1 should finish after 10 rounds of timer2 and 100 rounds of
timer3. But it does not, because every interval is 1ms (10ms on arm)
longer than it should be.

(on x86)
timer1 finishes after 1001ms,
timer2 after 10*101ms = 1010ms,
timer3 after 100*11ms = 1100ms

(on arm)
timer1 finishes after 1010ms,
timer2 after 10*110ms = 1100ms,
timer3 after 100*20ms = 2000ms!!!

The output of my test application is the following on x86:

(timer1)
TIMER_INTERVAL =1000ms
COUNTER =1
expected elapsed time =1000ms
elapsed time =1000ms and 845us

(timer2)
TIMER_INTERVAL =100ms
COUNTER =10
expected elapsed time =1000ms
elapsed time =1010ms and 29us

(timer3)
TIMER_INTERVAL =10ms
COUNTER =100
expected elapsed time =1000ms
elapsed time =1099ms and 744us


Please have a look at my test application:

#include <stdio.h>
#include <signal.h>
#include <sys/time.h>

#define TIMER_INTERVAL 10       /* ms; 1000, 100 and 10 were used above */
#define COUNTER        100      /* 1, 10 and 100 were used above */

static int c;
static struct timeval start;

void sig_alarm(int i)
{
        struct timeval tv;

        gettimeofday(&tv, NULL);

        if (c >= COUNTER) {
                int elapsed;    /* microseconds */
                c = 0;
                elapsed = (tv.tv_sec - start.tv_sec) * 1000000
                        + tv.tv_usec - start.tv_usec;

                printf("TIMER_INTERVAL =%dms\n"
                       "COUNTER =%d\n"
                       "expected elapsed time =%dms\n",
                       TIMER_INTERVAL,
                       COUNTER,
                       TIMER_INTERVAL * COUNTER);

                /* the remainder is in microseconds, not nanoseconds */
                printf("elapsed time =%dms and %dus\n\n\n",
                       elapsed / 1000, elapsed % 1000);
        }

        if (!c)
                start = tv;

        c++;
}

int main()
{
        struct itimerval itimer;

        itimer.it_interval.tv_sec = 0;
        itimer.it_interval.tv_usec = TIMER_INTERVAL * 1000;

        itimer.it_value.tv_sec = 0;
        itimer.it_value.tv_usec = TIMER_INTERVAL * 1000;

        signal(SIGALRM, sig_alarm);

        setitimer(ITIMER_REAL, &itimer, NULL);

        getc(stdin);

        return 0;
}


As I wrote, I think the problem is in timeval_to_jiffies(). On my arm
device, 10ms are converted to 20 ticks. On x86, 10ms are converted to
11 ticks.

Can somebody agree on that or at least point me to my mistakes?

Thx in advance,

Henry

--

Hi! I'm a .signature virus! Copy me into your
~/.signature to help me spread!

2004-09-16 03:31:54

by Randy.Dunlap

Subject: Re: Is there a problem in timeval_to_jiffies?

On Sun, 12 Sep 2004 16:33:19 +0200 Henry Margies wrote:

| Hello
|
| Why is nobody answering my question? I also tested my application on
| x86. The result is the same. To me, it looks like there is a problem.
| The only difference is that my x86 has a TICK value of 1ms and my arm
| device a value of 10ms.
|
| Imagine, there are 3 timers.
|
| timer1 is for 1s,
| timer2 is for 0.1s,
| timer3 is for 0.01s.
|
| Now, timer1 should finish after 10 rounds of timer2 and 100 rounds of
| timer3. But it does not, because every interval is 1ms (10ms on arm)
| longer than it should be.
|
| (on x86)
| timer1 finishes after 1001ms,
| timer2 after 10*101ms = 1010ms,
| timer3 after 100*11ms = 1100ms
|
| (on arm)
| timer1 finishes after 1010ms,
| timer2 after 10*110ms = 1100ms,
| timer3 after 100*20ms = 2000ms!!!
|
| The output of my test application is the following on x86:
|
| (timer1)
| TIMER_INTERVAL =1000ms
| COUNTER =1
| expected elapsed time =1000ms
| elapsed time =1000ms and 845us
|
| (timer2)
| TIMER_INTERVAL =100ms
| COUNTER =10
| expected elapsed time =1000ms
| elapsed time =1010ms and 29us
|
| (timer3)
| TIMER_INTERVAL =10ms
| COUNTER =100
| expected elapsed time =1000ms
| elapsed time =1099ms and 744us
|
|
| Please have a look at my test application:
|
| [test program snipped]
|
|
| As I wrote, I think the problem is in timeval_to_jiffies(). On my arm
| device, 10ms are converted to 20 ticks. On x86, 10ms are converted to
| 11 ticks.
|
| Can somebody agree on that or at least point me to my mistakes?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

I agree that timeval_to_jiffies() has some serious rounding errors.
I don't see why it even cares about any of the scaled math in the
(inline) function. I rewrote it (for userspace, not kernelspace)
like so, with expected results:


static __inline__ unsigned long
tv_to_jifs(const struct timeval *value)
{
        unsigned long sec = value->tv_sec;
        long usec = value->tv_usec;

        if (sec >= MAX_SEC_IN_JIFFIES) {
                sec = MAX_SEC_IN_JIFFIES;
                usec = 0;
        }
        return (((u64)sec * (u64)HZ) +
                (((u64)usec + (u64)HZ - 1LL) / (unsigned long)HZ));
}


Results of timeval_to_jiffies() compared to tv_to_jifs() [small sample]:
(tv_sec is fixed at 5, with tv_usec varying)

(jifs is the timeval_to_jiffies() result, jf2 the tv_to_jifs() result)
tv_usec: 499000, jifs: 5500, jf2: 5499
tv_usec: 499100, jifs: 5500, jf2: 5500
tv_usec: 499900, jifs: 5501, jf2: 5500
tv_usec: 500000, jifs: 5501, jf2: 5500
tv_usec: 500100, jifs: 5501, jf2: 5501
tv_usec: 500900, jifs: 5502, jf2: 5501
tv_usec: 501000, jifs: 5502, jf2: 5501
tv_usec: 501100, jifs: 5502, jf2: 5502
tv_usec: 501900, jifs: 5503, jf2: 5502
tv_usec: 502000, jifs: 5503, jf2: 5502
tv_usec: 502100, jifs: 5503, jf2: 5503



I think that tv_to_jifs() can be written for kernel use by using
do_div(), but I haven't tried that yet.
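
For what it's worth, a hypothetical kernel-side version might look like
the sketch below (untested, and deliberately dividing by the
microseconds-per-tick, USEC_PER_SEC / HZ, rather than by HZ, so that it
is not tied to HZ=1000). do_div(n, base) divides the u64 lvalue n in
place and returns the remainder:

static __inline__ unsigned long
tv_to_jifs_k(const struct timeval *value)
{
        u64 usec_ticks;
        unsigned long sec = value->tv_sec;
        long usec = value->tv_usec;

        if (sec >= MAX_SEC_IN_JIFFIES) {
                sec = MAX_SEC_IN_JIFFIES;
                usec = 0;
        }
        /* round the microseconds up to whole ticks */
        usec_ticks = (u64)usec + (USEC_PER_SEC / HZ) - 1;
        do_div(usec_ticks, USEC_PER_SEC / HZ);
        return (unsigned long)((u64)sec * HZ + usec_ticks);
}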

--
~Randy

2004-09-16 09:54:55

by George Anzinger

Subject: Re: Is there a problem in timeval_to_jiffies?

Randy.Dunlap wrote:
> On Sun, 12 Sep 2004 16:33:19 +0200 Henry Margies wrote:
>
> | Hello
> |
> | Why is nobody answering my question? I also tested my application on
> | x86. The result is the same. To me, it looks like there is a problem.
> | The only difference is that my x86 has a TICK value of 1ms and my arm
> | device a value of 10ms.

You, I think, sent a bug report. I replied via bugz. The open question is what
value your particular arm platform is using for CLOCK_TICK_RATE. See below.
> |
> | Imagine, there are 3 timers.
> |
> | timer1 is for 1s,
> | timer2 is for 0.1s,
> | timer3 is for 0.01s.
> |
> | Now, timer1 should finish after 10 rounds of timer2 and 100 rounds of
> | timer3. But it does not, because every interval is 1ms (10ms on arm)
> | longer than it should be.

Timers are constrained by the standard to NEVER finish early. This means that,
in order to account for the timer starting between two jiffies, an extra jiffie
needs to be added to the value. This will cause a timer to expire sometime
between the value asked for and that value + the resolution. The resolution is
roughly 1/HZ, but this value is not exact. For example, in the 2.6 x86 kernel
the CLOCK_TICK_RATE constrains the resolution (also the tick size) for HZ=1000
to be 999849 nanoseconds. With a tick of this size the best we can do with each
of these values is:
        .01s  ->   10.998ms
        .1s   ->  100.9847ms
        1s    -> 1000.8488ms
> |
> | (on x86)
> | timer1 finishes after 1001ms,
> | timer2 after 10*101ms = 1010ms,
> | timer3 after 100*11ms = 1100ms
> |
> | (on arm)
> | timer1 finishes after 1010ms,
> | timer2 after 10*110ms = 1100ms,
> | timer3 after 100*20ms = 2000ms!!!
> |
> | The output of my test application is the following on x86:
> |
> | (timer1)
> | TIMER_INTERVAL =1000ms
> | COUNTER =1
> | expected elapsed time =1000ms
> | elapsed time =1000ms and 845us
1000.8488ms expected. That number looks a few microseconds too small.
> |
> | (timer2)
> | TIMER_INTERVAL =100ms
> | COUNTER =10
> | expected elapsed time =1000ms
> | elapsed time =1010ms and 29us
10 * 100.9847ms is 1009.847ms. Looks good.
> |
> | (timer3)
> | TIMER_INTERVAL =10ms
> | COUNTER =100
> | expected elapsed time =1000ms
> | elapsed time =1099ms and 744us
100 * 10.998ms is 1099.8ms. This also looks good.
> |
> |
> | Please have a look at my test application:
> |
> | [test program snipped]
> |
> |
> | As I wrote, I think the problem is in timeval_to_jiffies(). On my arm
> | device, 10ms are converted to 20 ticks. On x86, 10ms are converted to
> | 11 ticks.
For the x86 this is correct, as 10 ticks would be 9.99849ms, which is less
than the asked-for 10ms. As to the ARM, we need to know the CLOCK_TICK_RATE.
This is used to determine the actual tick size using the following:

#define LATCH  ((CLOCK_TICK_RATE + HZ/2) / HZ)       /* For divider */
#define SH_DIV(NOM,DEN,LSH) (  ((NOM / DEN) << LSH)                     \
                             + (((NOM % DEN) << LSH) + DEN / 2) / DEN)

/* HZ is the requested value. ACTHZ is actual HZ ("<< 8" is for accuracy) */
#define ACTHZ (SH_DIV(CLOCK_TICK_RATE, LATCH, 8))

/* TICK_NSEC is the time between ticks in nsec assuming real ACTHZ */
#define TICK_NSEC (SH_DIV(1000000UL * 1000, ACTHZ, 8))

TICK_NSEC is then used in the conversion code.
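
Plugging in the usual x86 PIT numbers (CLOCK_TICK_RATE = 1193180, an
assumption based on the i8253 input clock, with HZ = 1000), a quick
userspace check shows where the 999849 nanosecond figure comes from:

#include <stdio.h>

int main(void)
{
        const unsigned long clock_tick_rate = 1193180;  /* i8253 PIT, Hz */
        const unsigned long hz = 1000;
        const unsigned long latch = (clock_tick_rate + hz / 2) / hz;

        /* the real tick length is LATCH / CLOCK_TICK_RATE seconds */
        printf("LATCH = %lu, tick = %.1f ns\n", latch,
               (double)latch * 1e9 / clock_tick_rate);  /* ~999849.1 */
        return 0;
}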

> |
> | Can somebody agree on that or at least point me to my mistakes?
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> I agree that timeval_to_jiffies() has some serious rounding errors.
> I don't see why it even cares about any of the scaled math in the
> (inline) function. I rewrote it (for userspace, not kernelspace)
> like so, with expected results:
>
What you are missing here is that the tick size for HZ=1000 is 999849
nanoseconds. THIS is why the scaled math was done.
>
> static __inline__ unsigned long
> tv_to_jifs(const struct timeval *value)
> {
>         unsigned long sec = value->tv_sec;
>         long usec = value->tv_usec;
>
>         if (sec >= MAX_SEC_IN_JIFFIES) {
>                 sec = MAX_SEC_IN_JIFFIES;
>                 usec = 0;
>         }
>         return (((u64)sec * (u64)HZ) +
>                 (((u64)usec + (u64)HZ - 1LL) / (unsigned long)HZ));
> }
>
>
> [comparison table snipped]
>
>
>
> I think that tv_to_jifs() can be written for kernel use by using
> do_div(), but I haven't tried that yet.

do_div() (or any div) is very expensive. The scaled math is much faster and
retains all the precision we need. The errors are on the order of a few tens
of parts per billion (like 55 ppb).
>
If you would like, I could send you the code I used to test the conversion
functions.
--
George Anzinger [email protected]
High-res-timers: http://sourceforge.net/projects/high-res-timers/
Preemption patch: http://www.kernel.org/pub/linux/kernel/people/rml

2004-09-16 15:51:02

by Henry Margies

Subject: Re: Is there a problem in timeval_to_jiffies?

Hello,

Thank you for your answers.

> You, I think, sent a bug report. I replied via bugz. The open
> question is what value your particular arm platform is using
> for CLOCK_TICK_RATE. See below.

That is right, but I did not send the bug report; I just answered
your reply. The requested values are:

HZ: 100
LATCH: 600000
USEC_ROUND: 4294967295
CLOCK_TICK_RATE: 60000000
TICK_NSEC: 10000000

> Timers are constrained by the standard to NEVER finish early.

That is why I wrote to this mailing list: to determine whether it
is a bug or a feature :)

But, especially on my arm device, the timers seem to be more or
less accurate. They appear every 20ms with an average deviation of
less than 20us (without any load, of course). The only bad thing
is that I requested timers of 10ms. I understand your statement
that timers should not finish early, but in my case they simply
appear exactly 10ms late.

> This means that, in order to account for the timer starting
> between two jiffies, an extra jiffie needs to be added to the
> value. This will cause a timer to expire sometime between the
> value asked for and that value + the resolution.

In my case, that means that most of my timers will appear at
least 9980us too late. And it is not possible to have 10ms
timers at all.

But if itimers have to act like this, the current implementation
is right. Anyway, on my board I have to pay a high price for
that.

Just another comment: 2.4 kernels don't have this feature. So is
there really a need to have it?


Best regards,
Henry

--

Hi! I'm a .signature virus! Copy me into your
~/.signature to help me spread!

2004-09-16 18:18:25

by Henry Margies

Subject: Re: Is there a problem in timeval_to_jiffies?

Hi,


On Thu, 16 Sep 2004 02:54:39 -0700
George Anzinger <[email protected]> wrote:

> Timers are constrained by the standard to NEVER finish early.

I just thought about that again, and I think you are wrong.
Maybe your statement is true for one-shot timers, but not for
interval timers.

No interval timer can guarantee that the time between two
triggers is always greater than or equal to the time you
programmed.

1 occurrence of a 1000ms timer,
10 occurrences of a 100ms timer and
100 occurrences of a 10ms timer should take the same time.

For example:

Suppose I want an interval timer for every second. For some
special reason, the time between two triggers became 1.2 seconds.
The question is now: when do you want the next timer?

Your approach would trigger the timer after at least one second.
But that is not the behaviour of an interval timer. An interval
timer should trigger in 0.8 seconds, because I wanted it to
trigger _every_ second.
If you want to have at least one second between your timers, you
have to use one-shot timers and restart them after each
occurrence.

And in fact, I think that no userspace program can ever take
advantage of your approach, because it can be interrupted at
any time, so there is no guarantee at all that there will be at
least some fixed time between the very important commands (for
interval timers).


So, what about adding this rounding value just to it_value, to
guarantee that the first occurrence comes after at least this time?


Best regards,

Henry

--

Hi! I'm a .signature virus! Copy me into your
~/.signature to help me spread!


2004-09-16 20:22:44

by George Anzinger

Subject: Re: Is there a problem in timeval_to_jiffies?

Henry Margies wrote:
> Hi,
>
>
> On Thu, 16 Sep 2004 02:54:39 -0700
> George Anzinger <[email protected]> wrote:
>
>
>>Timers are constrained by the standard to NEVER finish early.
>
>
> I just thought about that again, and I think you are wrong.
> Maybe your statement is true for one-shot timers, but not for
> interval timers.
>
> No interval timer can guarantee that the time between two
> triggers is always greater than or equal to the time you
> programmed.

This depends on how you interpret things. Strictly speaking you are right,
in that a given timer signal can be delayed (latency things) while the next
signal is not, so that the interval would appear short. However, the
standard seems to say that what you should measure is the expected arrival
time (i.e. assume zero latency). In this case the standard calls for timers
NEVER to be early.
>
> 1 occurrence of a 1000ms timer,
> 10 occurrences of a 100ms timer and
> 100 occurrences of a 10ms timer should take the same time.

You are assuming NICE things about timers that just are not true. The problem
is resolution. The timer resolution is a function of what the hardware can
actually do. The system code attempts to make the resolution as close to 1/HZ
as possible, but this will not always be exact. In fact, the best that the x86
hardware can do with HZ=1000 is 999849 nanoseconds. Hence the result as per my
message.
>
> For example:
>
> Suppose I want an interval timer for every second. For some
> special reason, the time between two triggers became 1.2 seconds.
> The question is now: when do you want the next timer?

You are talking about latency here. The kernel and the standard do not account
for latency.
>
> Your approach would trigger the timer after at least one second.
> But that is not the behaviour of an interval timer. An interval
> timer should trigger in 0.8 seconds, because I wanted it to
> trigger _every_ second.

Yes, within the limits of the hardware imposed resolution.

> If you want to have at least one second between your timers, you
> have to use one-shot timers and restart them after each
> occurrence.
>
Yes.

> And in fact, I think that no userspace program can ever take
> advantage of your approach, because it can be interrupted at
> any time, so there is no guarantee at all that there will be at
> least some fixed time between the very important commands (for
> interval timers).

Uh, my approach???
>
>
> So, what about adding this rounding value just to it_value, to
> guarantee that the first occurrence comes after at least this time?

The it_value and the it_interval are, indeed, computed differently. The
it_value needs to have 1 additional resolution size period added to it to
account for the initial time starting between ticks. The it_interval does not
have this additional period added to it. Both values, however, are first
rounded up to the next resolution size value.
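
Concretely, on the arm box above (tick = 10ms), a 10ms request works out,
per this description, as:

        it_value:    10ms rounds to 1 tick, plus 1 tick of slack = 2 ticks
        it_interval: 10ms rounds to 1 tick                       = 1 tick

so the first expiry should land 10 to 20ms out and each later one 10ms
after that; the 20ms intervals actually measured on arm earlier in the
thread are the discrepancy still to be explained.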

--
George Anzinger [email protected]
High-res-timers: http://sourceforge.net/projects/high-res-timers/
Preemption patch: http://www.kernel.org/pub/linux/kernel/people/rml

2004-09-17 09:59:30

by Henry Margies

Subject: Re: Is there a problem in timeval_to_jiffies?


Ok, first of all I want to show you the output of my program
running on my arm device.

TIMER_INTERVAL =1000ms
COUNTER =1
expected elapsed time =1000ms
elapsed time =1010ms and 14us

TIMER_INTERVAL =1000ms
COUNTER =1
expected elapsed time =1000ms
elapsed time =1009ms and 981us

TIMER_INTERVAL =1000ms
COUNTER =1
expected elapsed time =1000ms
elapsed time =1010ms and 12us

As you can see, it is always about 10ms late. The 14us, -19us and
12us differences are because of latency.

TIMER_INTERVAL =100ms
COUNTER =10
expected elapsed time =1000ms
elapsed time =1100ms and 9us

TIMER_INTERVAL =100ms
COUNTER =10
expected elapsed time =1000ms
elapsed time =1099ms and 994us

TIMER_INTERVAL =100ms
COUNTER =10
expected elapsed time =1000ms
elapsed time =1100ms and 8us

Much more interesting is the output for 10ms timers.

TIMER_INTERVAL =10ms
COUNTER =100
expected elapsed time =1000ms
elapsed time =2000ms and 0us

TIMER_INTERVAL =10ms
COUNTER =100
expected elapsed time =1000ms
elapsed time =1999ms and 998us

TIMER_INTERVAL =10ms
COUNTER =100
expected elapsed time =1000ms
elapsed time =2000ms and 3us


Now you can maybe see my problem. If I want to write a program
which should just send something every 10ms, then with the current
2.6 implementation it will only send something every 20ms. I don't
care that much about the exact time between timers. But for 10ms
interval timers, I want to have 100 triggered timers within one
second.

The precision of timers can never be better than the size of
one jiffie. But with the old 2.4 solution the maximum deviation
is +/- 10ms, while with your solution (the current 2.6 approach)
it is +20ms (on the arm platform, where the jiffie size is 10ms).
The bad thing is that the average deviation for 2.4 kernels is
0ms and for 2.6 kernels 10ms.

I see the problem for the x86 architecture, where the size of one
jiffie is 999849ns. That means that:

jiffie  0 = 0ms      0ns
jiffie  1 = 0ms 999849ns
jiffie  2 = 1ms 999698ns
jiffie  3 = 2ms 999547ns
jiffie  4 = 3ms 999396ns
jiffie  5 = 4ms 999245ns
jiffie  6 = 5ms 999094ns
jiffie  7 = 6ms 998943ns
jiffie  8 = 7ms 998792ns
jiffie  9 = 8ms 998641ns
jiffie 10 = 9ms 998490ns

Right? But for arm, with a jiffie size of 10000000ns, it is much
easier. And that is why I don't understand why a one-second
interval is converted to 101 jiffies (on arm).


On Thu, 16 Sep 2004 13:19:58 -0700
George Anzinger <[email protected]> wrote:

> [...] However, the standard seems to
> say that what you should measure is the expected arrival time
> (i.e. assume zero latency). In this case the standard calls
> for timers NEVER to be early.

I agree. But then, why add one jiffie to every interval? If
there is no latency, the timer should appear right at the
beginning of a jiffie. For x86 you are right, because 10 jiffies
are less than 10ms. But for arm, 1 jiffie is precisely 10ms.


> > So, what about adding this rounding value just to it_value, to
> > guarantee that the first occurrence comes after at least this time?
>
> The it_value and the it_interval are, indeed, computed
> differently. The it_value needs to have 1 additional
> resolution size period added to it to account for the initial
> time starting between ticks. The it_interval does not have
> this additional period added to it. Both values, however, are
> first rounded up to the next resolution size value.

Ok, I will have a closer look at the rounding. Maybe it is just
not working for arm.

Please, can you send me your test application?

Best regards,

Henry

--

Hi! I'm a .signature virus! Copy me into your
~/.signature to help me spread!

2004-09-29 20:56:36

by Tim Bird

Subject: Re: Is there a problem in timeval_to_jiffies?

Henry Margies wrote:
> Right? But for arm, with a jiffie size of 10000000ns, it is much
> easier. And that is why I don't understand why a one-second
> interval is converted to 101 jiffies (on arm).
...
> I agree. But then, why add one jiffie to every interval? If
> there is no latency, the timer should appear right at the
> beginning of a jiffie. For x86 you are right, because 10 jiffies
> are less than 10ms. But for arm, 1 jiffie is precisely 10ms.

How does the computer "know" that the timer is at the beginning
of the jiffy? By definition, Linux (without HRT support) has
no way of dealing with sub-jiffy resolution for timers.

Maybe a graphic (ascii-graphic) will help:

tick 1 ---------------------




tick 2 ---------------------
schedule point A ->


schedule point B ->
tick 3 ---------------------




tick 4 ---------------------




tick 5 ---------------------


Let's say that at point A, you ask for a 20 millisecond timer
(2 jiffies, on ARM). You think you are asking for a timer to fire
on tick 4 (20 milliseconds after tick 2), but Linux can't
distinguish point A from point B. In order to avoid
the situation where someone scheduled a 20 millisecond timer
at point B and had it fire on tick 4 (only 10 milliseconds
later), Linux adds one jiffy to the expiration time.
Both timers (set at point A or point B) would fire
on tick 5. For the A timer, this makes it 30 milliseconds
(or the requested jiffies plus one) later, which looks pretty bad.
For the B timer, the interval would be close to 20
milliseconds, which looks pretty good.

If you are rescheduling one-shot timers immediately
after they fire, you should 'undershoot' on the time
interval, to hit the tick boundary you want, based
on the jiffy resolution of your platform.
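
Something like the sketch below, where JIFFY_US and PERIOD_US are
illustrative values for a 10-millisecond-jiffy ARM box, and the
one-jiffy undershoot relies on the it_value round-up-plus-one
behavior described earlier in the thread:

#include <signal.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

#define JIFFY_US  10000         /* assumed tick length: 10ms (HZ=100) */
#define PERIOD_US 20000         /* the period we actually want */

static void rearm(void)
{
        struct itimerval it;

        memset(&it, 0, sizeof(it));
        /* Ask for one jiffy less than the real period: the kernel
         * rounds it_value up to the next tick and adds one more,
         * so the signal lands on the tick we are really aiming for. */
        it.it_value.tv_usec = PERIOD_US - JIFFY_US;
        setitimer(ITIMER_REAL, &it, NULL);   /* it_interval 0: one-shot */
}

static void handler(int sig)
{
        (void)sig;
        rearm();                /* reschedule immediately after firing */
}

int main(void)
{
        signal(SIGALRM, handler);
        rearm();
        for (;;)
                pause();        /* the work happens around the signals */
}

Note that each rearm still accumulates whatever latency the signal
delivery adds, which is the price of this scheme.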

=============================
Tim Bird
Architecture Group Chair, CE Linux Forum
Senior Staff Engineer, Sony Electronics
=============================

2004-09-29 21:24:50

by Jon Masters

Subject: Re: Is there a problem in timeval_to_jiffies?

On Wed, 29 Sep 2004 13:56:24 -0700, Tim Bird <[email protected]> wrote:

Apologies for butting in.

> If you are rescheduling one-shot timers immediately
> after they fire, you should 'undershoot' on the time
> interval, to hit the tick boundary you want, based
> on the jiffy resolution of your platform.

Can I just do a ^^^^ here - this is what the original poster really
needs to know to solve the immediate problem of overshooting - then a
good book can help with the rest.

Jon.

2004-09-29 22:03:12

by Andi Kleen

Subject: Re: Is there a problem in timeval_to_jiffies?

Tim Bird <[email protected]> writes:

> Henry Margies wrote:
>> Right? But for arm, with a jiffie size of 10000000ns, it is much
>> easier. And that is why I don't understand why a one-second
>> interval is converted to 101 jiffies (on arm).
> ...
>> I agree. But then, why add one jiffie to every interval? If
>> there is no latency, the timer should appear right at the
>> beginning of a jiffie. For x86 you are right, because 10 jiffies
>> are less than 10ms. But for arm, 1 jiffie is precisely 10ms.
>
> How does the computer "know" that the timer is at the beginning
> of the jiffy? By definition, Linux (without HRT support) has

do_gettimeofday(), or better the POSIX monotonic clock, normally has
much better resolution than a jiffie (on x86, typically the resolution
of the CPU clock); xtime is the time-of-day of the last jiffie.
However, calling do_gettimeofday() and doing the calculation may
be too expensive.
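
A trivial userspace illustration of the resolution point (the exact
delta will vary; on a TSC-equipped x86 it is typically well under a
microsecond, far finer than either tick size discussed in this thread):

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
        struct timeval a, b;

        gettimeofday(&a, NULL);
        gettimeofday(&b, NULL);   /* back-to-back readings */
        printf("delta = %ld us (a jiffie here would be 1000 or 10000 us)\n",
               (b.tv_sec - a.tv_sec) * 1000000L + (b.tv_usec - a.tv_usec));
        return 0;
}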

-Andi

2004-10-01 11:47:04

by Henry Margies

Subject: Re: Is there a problem in timeval_to_jiffies?

Hi,

On Wed, 29 Sep 2004 13:56:24 -0700
Tim Bird <[email protected]> wrote:

> > If there is no latency, the timer should appear right at the
> > beginning of a jiffie
>
> How does the computer "know" that the timer is at the beginning
> of the jiffy?

I was assuming no latency, and in that case the timer should be
managed right at the beginning of a jiffie. George Anzinger
pointed out that timers should be designed to never be early, and
for that design you have to assume there is no latency.

Another thing is that the calculation of jiffies for the first
occurrence of a timer is different from the interval calculation
(in the first case, one jiffie is always added). For interval
timers it is normal that they sometimes take less time than you
expect, because they needed more time for the previous loop, for
example. But after 1000 loops the total time should be close to
1000 * time_for_one_loop.

> If you are rescheduling one-shot timers immediately
> after they fire, you should 'undershoot' on the time
> interval, to hit the tick boundary you want, based
> on the jiffy resolution of your platform.

'undershooting' is not a good idea.

The current calculation of interval to jiffies works quite well
(I guess). But for arm there is this small problem. I'm still
waiting for the test application from George, and in fact I don't
have that much time at the moment to work on this.




Henry


--

Hi! I'm a .signature virus! Copy me into your
~/.signature to help me spread!