2005-01-15 02:21:21

by David Mosberger

Subject: sparse warning, or why does jiffies_to_msecs() return an int?

I'm seeing the following warning from sparse:

include/linux/jiffies.h:262:9: warning: cast truncates bits from constant value (3ffffffffffffffe becomes fffffffe)

it took me a while to realize that this is due to
the jiffies_to_msecs(MAX_JIFFY_OFFSET) call in
msecs_to_jiffies() and due to the fact that
jiffies_to_msecs() returns only an "unsigned int".

Is there a good reason to constrain the return value to 4 billion
msecs? If so, what's the proper way to shut up sparse?

On a related note, there seem to be some overflow issues in
jiffies_to_{msec,usec}. For example:

return (j * 1000) / HZ;

can overflow if j > MAXULONG/1000, which is the case for
MAX_JIFFY_OFFSET.

I think it would be better to use:

return 1000*(j/HZ) + 1000*(j%HZ)/HZ;

instead. No?
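(Sketched out as standalone C, just to illustrate -- DEMO_HZ=250 is a
hypothetical value, not kernel code, and I'm assuming 64-bit arithmetic:)

```c
#include <assert.h>
#include <stdint.h>

#define DEMO_HZ 250	/* hypothetical HZ, for illustration only */

/* the current formula: multiplies first, so the intermediate can wrap */
static uint64_t naive_ms(uint64_t j)
{
	return (j * 1000) / DEMO_HZ;
}

/* the proposed formula: divides first, avoiding the intermediate wrap */
static uint64_t split_ms(uint64_t j)
{
	return 1000 * (j / DEMO_HZ) + 1000 * (j % DEMO_HZ) / DEMO_HZ;
}
```

For small j the two agree; for j = MAX_JIFFY_OFFSET (0x3ffffffffffffffe
on 64-bit) the first one wraps while the second yields exactly 4*j.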

--david


2005-01-15 04:29:47

by Linus Torvalds

Subject: Re: sparse warning, or why does jiffies_to_msecs() return an int?



On Fri, 14 Jan 2005, David Mosberger wrote:
>
> I'm seeing the following warning from sparse:
>
> include/linux/jiffies.h:262:9: warning: cast truncates bits from constant value (3ffffffffffffffe becomes fffffffe)

Indeed. It happens on any 64-bit architecture, I think.

> it took me a while to realize that this is due to
> the jiffies_to_msecs(MAX_JIFFY_OFFSET) call in
> msecs_to_jiffies() and due to the fact that
> jiffies_to_msecs() returns only an "unsigned int".
>
> Is there a good reason to constrain the return value to 4 billion
> msecs? If so, what's the proper way to shut up sparse?

There's no good way to shut up sparse, I think. The fact is, we _are_
losing bits, but it doesn't matter much in this case. I think
"jiffies_to_msecs(MAX_JIFFY_OFFSET)" is fundamentally a suspect operation
(since the ranges are different for the two types), and I think that the
sparse warning is correct, but it's one of those cases where "doing the
wrong thing is not always wrong enough to matter".

> On a related note, there seem to be some overflow issues in
> jiffies_to_{msec,usec}. For example:
>
> return (j * 1000) / HZ;
>
> can overflow if j > MAXULONG/1000, which is the case for
> MAX_JIFFY_OFFSET.

Right. Same kind of situation.

> I think it would be better to use:
>
> return 1000*(j/HZ) + 1000*(j%HZ)/HZ;
>
> instead. No?

I don't see it making a huge difference. Whatever you do will be wrong for
some value of HZ anyway. If HZ is 10, and j > MAXULONG/10, then...
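(To make that concrete -- a sketch assuming 64-bit unsigned long and a
hypothetical HZ of 10. The "divide first" form overflows too, and even
stops being monotonic: one extra jiffy can produce far fewer msecs:)

```c
#include <assert.h>
#include <stdint.h>

#define DEMO_HZ 10	/* hypothetical HZ, for illustration only */

/* the "divide first" formula; it still wraps once 1000*(j/HZ)
 * no longer fits in 64 bits, i.e. for j > ~0UL/100 */
static uint64_t split_ms(uint64_t j)
{
	return 1000 * (j / DEMO_HZ) + 1000 * (j % DEMO_HZ) / DEMO_HZ;
}
```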

The issue is the same: jiffies and msecs have different ranges, so the
"fix" to some degree would be the same: making MAX_JIFFY_OFFSET small
enough. But as with the other case, it doesn't much seem to matter - it
turns out that the overflow cases end up being "very large integers"
anyway, which is good enough, since that's what all those MAX_xxx things
are all about.

In the meantime, a warning might eventually make somebody decide to do
something intelligent that just makes it all go away (most likely
something like avoiding the conversion in the first place, and using
something like MAX_MSECS instead).

Linus

2005-01-15 05:57:47

by David Mosberger

Subject: Re: sparse warning, or why does jiffies_to_msecs() return an int?

>>>>> On Fri, 14 Jan 2005 20:29:07 -0800 (PST), Linus Torvalds <[email protected]> said:

>> Is there a good reason to constrain the return value to 4
>> billion msecs? If so, what's the proper way to shut up sparse?

Linus> There's no good way to shut up sparse, I think. The fact is,
Linus> we _are_ losing bits, but it doesn't matter much in this
Linus> case.

Yes, you're right.

Linus> I think "jiffies_to_msecs(MAX_JIFFY_OFFSET)" is fundamentally
Linus> a suspect operation (since the ranges are different for the
Linus> two types), and I think that the sparse warning is correct,
Linus> but it's one of those cases where "doing the wrong thing is
Linus> not always wrong enough to matter".

How about something along the lines of the attached? The test in
msecs_to_jiffies is nonsensical for HZ >= 1000 because

(a) jiffies_to_msecs(x) <= x, if HZ >= 1000, and
(b) MAX_UINT <= MAX_ULONG
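(A quick sanity check of (a), written as standalone C with two
hypothetical HZ values >= 1000 -- with 1000/HZ == 1 or a rounding-up
division by HZ/1000, the conversion can never produce more msecs than
it was given jiffies:)

```c
#include <assert.h>
#include <stdint.h>

/* jiffies_to_msecs for HZ == 1000: the (1000/HZ)*j branch */
static uint64_t j2ms_hz1000(uint64_t j)
{
	return (1000 / 1000) * j;
}

/* jiffies_to_msecs for HZ == 2000: the (j + HZ/1000 - 1)/(HZ/1000) branch */
static uint64_t j2ms_hz2000(uint64_t j)
{
	return (j + (2000 / 1000) - 1) / (2000 / 1000);
}
```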

--david

===== include/linux/jiffies.h 1.11 vs edited =====
--- 1.11/include/linux/jiffies.h 2005-01-04 18:48:02 -08:00
+++ edited/include/linux/jiffies.h 2005-01-14 21:52:46 -08:00
@@ -276,8 +276,10 @@

static inline unsigned long msecs_to_jiffies(const unsigned int m)
{
+#if HZ < 1000
if (m > jiffies_to_msecs(MAX_JIFFY_OFFSET))
return MAX_JIFFY_OFFSET;
+#endif
#if HZ <= 1000 && !(1000 % HZ)
return (m + (1000 / HZ) - 1) / (1000 / HZ);
#elif HZ > 1000 && !(HZ % 1000)

2005-01-15 18:06:29

by Linus Torvalds

Subject: Re: sparse warning, or why does jiffies_to_msecs() return an int?



On Fri, 14 Jan 2005, David Mosberger wrote:
>
> How about something along the lines of the attached? The test in
> msecs_to_jiffies is nonsensical for HZ >= 1000

Hmm.. I don't think your patch is wrong per se, but I do think it's a bit
too subtle. I'd almost rather make "jiffies_to_msecs()" just test for
overflow instead, and that should also fix it.

Linus

2005-01-20 23:48:04

by David Mosberger

Subject: Re: sparse warning, or why does jiffies_to_msecs() return an int?

>>>>> On Sat, 15 Jan 2005 10:05:37 -0800 (PST), Linus Torvalds <[email protected]> said:

Linus> Hmm.. I don't think your patch is wrong per se, but I do
Linus> think it's a bit too subtle. I'd almost rather make
Linus> "jiffies_to_msecs()" just test for overflow instead, and that
Linus> should also fix it.

You sure about that?

Actually, I think my patch was broken anyhow for HZ < 1000 because you
can potentially get integer-overflows in temporary results which could
make things come out wrong again.

I _think_ the attached patch works for all reasonable cases reasonably
uniformly, but if you thought the previous patch was subtle, I'm sure
you're going to like this one even less.

Note that with the patch, platforms where HZ is not a power of two and
doesn't fit any of the other special cases (namely (HZ % 1000) != 0 &&
(1000 % HZ) != 0) would suffer a penalty. AFAICS, this is true only
for Alpha/Rawhide (HZ=1200). In such a case, rather than:

(j * 1000)/HZ

the new code would compute:

(j/HZ)*1000 + ((j%HZ)*1000)/HZ
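(The two forms agree exactly whenever the first one doesn't overflow:
writing j = q*HZ + r, both reduce to q*1000 + (r*1000)/HZ. A sketch
using the Rawhide HZ=1200 case, standalone and 64-bit-assuming:)

```c
#include <assert.h>
#include <stdint.h>

#define DEMO_HZ 1200	/* HZ on Alpha/Rawhide */

/* the old form: one multiply, one divide */
static uint64_t mul_first(uint64_t j)
{
	return (j * 1000) / DEMO_HZ;
}

/* the new form: divide, then reconstruct the remainder's contribution */
static uint64_t div_first(uint64_t j)
{
	return (j / DEMO_HZ) * 1000 + ((j % DEMO_HZ) * 1000) / DEMO_HZ;
}
```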

It looks to me like we could get rid of all the ugly & complex
intermediate overflow-checks if we defined MAX_JIFFY_OFFSET
as:
(~0UL / 1000)

However, on a 32-bit platform that runs at 1000 Hz, this would limit
us to 4294 seconds. That may be cutting it a bit close.
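(The arithmetic behind that: ~0UL/1000 on a 32-bit box is 4294967
jiffies, which at HZ=1000 is 4294967 ms, i.e. about 4294 seconds --
roughly 71 minutes. A trivial check:)

```c
#include <assert.h>
#include <stdint.h>

/* MAX_JIFFY_OFFSET as ~0UL/1000 would be, on a 32-bit unsigned long */
static uint32_t max_jiffy_offset_32(void)
{
	return UINT32_MAX / 1000;
}
```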

--david

===== include/linux/jiffies.h 1.11 vs edited =====
--- 1.11/include/linux/jiffies.h 2005-01-04 18:48:02 -08:00
+++ edited/include/linux/jiffies.h 2005-01-20 15:21:14 -08:00
@@ -254,13 +254,32 @@
*/
static inline unsigned int jiffies_to_msecs(const unsigned long j)
{
+ unsigned long res;
+
#if HZ <= 1000 && !(1000 % HZ)
- return (1000 / HZ) * j;
+ unsigned long max = ~0UL / (1000 / HZ);
+
+ if (j > max)
+ j = max;
+ res = (1000 / HZ) * j;
#elif HZ > 1000 && !(HZ % 1000)
- return (j + (HZ / 1000) - 1)/(HZ / 1000);
+ res = (j + (HZ / 1000) - 1) / (HZ / 1000);
#else
- return (j * 1000) / HZ;
+ /*
+ * HZ better be a power of two; otherwise this gets real
+ * expensive. Better expensive than wrong, though.
+ */
+# if HZ < 1000
+ unsigned long max = (~0UL / 1000) * HZ;
+
+ if (j > max)
+ j = max;
+# endif
+ res = (j / HZ) * 1000 + ((j % HZ) * 1000) / HZ;
#endif
+ if (res > ~0U)
+ return ~0U;
+ return res;
}

static inline unsigned int jiffies_to_usecs(const unsigned long j)
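(For playing with the idea outside the kernel: a standalone sketch of
the saturating conversion the patch is after, for the first #if branch,
assuming a hypothetical HZ of 100 and 64-bit unsigned long:)

```c
#include <assert.h>
#include <stdint.h>
#include <limits.h>

#define DEMO_HZ 100	/* hypothetical: HZ <= 1000 and 1000 % HZ == 0 */

/* saturating jiffies -> msecs: clamp j so (1000/HZ)*j cannot wrap,
 * then clamp the result to what an unsigned int can hold */
static unsigned int j2ms_saturating(uint64_t j)
{
	uint64_t max = UINT64_MAX / (1000 / DEMO_HZ);
	uint64_t res;

	if (j > max)
		j = max;
	res = (1000 / DEMO_HZ) * j;
	if (res > UINT_MAX)
		return UINT_MAX;
	return (unsigned int)res;
}
```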

2005-05-03 18:44:06

by Nish Aravamudan

Subject: Re: sparse warning, or why does jiffies_to_msecs() return an int?

On 1/14/05, David Mosberger <[email protected]> wrote:
> I'm seeing the following warning from sparse:
>
> include/linux/jiffies.h:262:9: warning: cast truncates bits from constant value (3ffffffffffffffe becomes fffffffe)
>
> it took me a while to realize that this is due to
> the jiffies_to_msecs(MAX_JIFFY_OFFSET) call in
> msecs_to_jiffies() and due to the fact that
> jiffies_to_msecs() returns only an "unsigned int".
>
> Is there a good reason to constrain the return value to 4 billion
> msecs? If so, what's the proper way to shut up sparse?

Is there any logical reason to need longer than about 49 days of
milliseconds (the span of an unsigned int)? I mean, sure, we could
support years in milliseconds, but why :) ? If you need longer,
specify in seconds. Or add an interface which does days :) I think
it's perfectly reasonable to constrain the storage used to represent
each time unit as follows:

seconds: unsigned int
milliseconds: unsigned int
microseconds: unsigned long
nanoseconds: u64
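(Back-of-the-envelope check of the spans those choices buy, assuming a
32-bit unsigned int -- roughly 49 days of milliseconds and well over a
century of seconds:)

```c
#include <assert.h>
#include <stdint.h>

/* whole days representable with milliseconds in an unsigned int */
static uint64_t msec_days(void)
{
	return UINT32_MAX / (1000ULL * 60 * 60 * 24);
}

/* whole (365-day) years representable with seconds in an unsigned int */
static uint64_t sec_years(void)
{
	return UINT32_MAX / (3600ULL * 24 * 365);
}
```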

These are the assumptions I have made in my timeofday-based soft-timer
rework, for what it's worth.

I will look into all the math, though, as all the conversions need to
be accurate for my rework (to support existing interfaces).

Thanks,
Nish