The return types of the functions _inb, _inw and _inl are all u16, which looks
wrong. This patch makes them u8, u16 and u32 respectively.
The original commit text for these does not indicate that these should
be all forced to u16.
Fixes: f009c89df79a ("io: Provide _inX() and _outX()")
Signed-off-by: Stafford Horne <[email protected]>
---
include/asm-generic/io.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/asm-generic/io.h b/include/asm-generic/io.h
index 8b1e020e9a03..30a3aab312e6 100644
--- a/include/asm-generic/io.h
+++ b/include/asm-generic/io.h
@@ -456,7 +456,7 @@ static inline void writesq(volatile void __iomem *addr, const void *buffer,
#if !defined(inb) && !defined(_inb)
#define _inb _inb
-static inline u16 _inb(unsigned long addr)
+static inline u8 _inb(unsigned long addr)
{
u8 val;
@@ -482,7 +482,7 @@ static inline u16 _inw(unsigned long addr)
#if !defined(inl) && !defined(_inl)
#define _inl _inl
-static inline u16 _inl(unsigned long addr)
+static inline u32 _inl(unsigned long addr)
{
u32 val;
--
2.26.2
On Sun, Jul 26, 2020 at 6:14 AM Stafford Horne <[email protected]> wrote:
>
> The return types of the functions _inb, _inw and _inl are all u16, which looks
> wrong. This patch makes them u8, u16 and u32 respectively.
>
> The original commit text for these does not indicate that these should
> be all forced to u16.
Is it aligned with all architectures that support this interface natively?
(The return value is arch-dependent AFAIU, so it might actually return
16 bits for a byte read, but I agree that this is weird for a 32-bit value.
I think you have to elaborate more in the commit message.)
--
With Best Regards,
Andy Shevchenko
On Sun, Jul 26, 2020 at 12:00:37PM +0300, Andy Shevchenko wrote:
> On Sun, Jul 26, 2020 at 6:14 AM Stafford Horne <[email protected]> wrote:
> >
> > The return types of the functions _inb, _inw and _inl are all u16, which looks
> > wrong. This patch makes them u8, u16 and u32 respectively.
> >
> > The original commit text for these does not indicate that these should
> > be all forced to u16.
>
> Is it aligned with all architectures that support this interface natively?
>
> (The return value is arch-dependent AFAIU, so it might actually return
> 16 bits for a byte read, but I agree that this is weird for a 32-bit value.
> I think you have to elaborate more in the commit message.)
Well, this is the generic I/O code; at least these APIs appear not to differ
for each architecture. The value read by the architecture-dependent code, i.e.
__raw_readb() below, is placed into a u8. So I think the output of the
function will be u8.
static inline u8 _inb(unsigned long addr)
{
u8 val;
__io_pbr();
val = __raw_readb(PCI_IOBASE + addr);
__io_par(val);
return val;
}
I can expand the commit text, but I would like to get some comments from the
original author to confirm if this is an issue.
-Stafford
On Sun, Jul 26, 2020 at 2:53 PM Stafford Horne <[email protected]> wrote:
>
> On Sun, Jul 26, 2020 at 12:00:37PM +0300, Andy Shevchenko wrote:
> > On Sun, Jul 26, 2020 at 6:14 AM Stafford Horne <[email protected]> wrote:
> > >
> > > The return types of the functions _inb, _inw and _inl are all u16, which looks
> > > wrong. This patch makes them u8, u16 and u32 respectively.
> > >
> > > The original commit text for these does not indicate that these should
> > > be all forced to u16.
> >
> > Is it aligned with all architectures that support this interface natively?
> >
> > (The return value is arch-dependent AFAIU, so it might actually return
> > 16 bits for a byte read, but I agree that this is weird for a 32-bit value.
> > I think you have to elaborate more in the commit message.)
>
> Well, this is the generic I/O code; at least these APIs appear not to differ
> for each architecture. The value read by the architecture-dependent code, i.e.
> __raw_readb() below, is placed into a u8. So I think the output of the
> function will be u8.
>
> static inline u8 _inb(unsigned long addr)
> {
> u8 val;
>
> __io_pbr();
> val = __raw_readb(PCI_IOBASE + addr);
> __io_par(val);
> return val;
> }
>
> I can expand the commit text, but I would like to get some comments from the
> original author to confirm if this is an issue.
I think your original version is fine, this was clearly just a typo and I've
applied your fix now and will forward it to Linus in the next few days,
giving John the chance to add his Ack or further comments.
Thanks a lot for spotting it and sending a fix.
Arnd
On 27/07/2020 09:04, Arnd Bergmann wrote:
> On Sun, Jul 26, 2020 at 2:53 PM Stafford Horne <[email protected]> wrote:
>>
>> On Sun, Jul 26, 2020 at 12:00:37PM +0300, Andy Shevchenko wrote:
>>> On Sun, Jul 26, 2020 at 6:14 AM Stafford Horne <[email protected]> wrote:
>>>>
>>>> The return types of the functions _inb, _inw and _inl are all u16, which looks
>>>> wrong. This patch makes them u8, u16 and u32 respectively.
>>>>
>>>> The original commit text for these does not indicate that these should
>>>> be all forced to u16.
>>>
>>> Is it aligned with all architectures that support this interface natively?
>>>
>>> (The return value is arch-dependent AFAIU, so it might actually return
>>> 16 bits for a byte read, but I agree that this is weird for a 32-bit value.
>>> I think you have to elaborate more in the commit message.)
>>
>> Well, this is the generic I/O code; at least these APIs appear not to differ
>> for each architecture. The value read by the architecture-dependent code, i.e.
>> __raw_readb() below, is placed into a u8. So I think the output of the
>> function will be u8.
>>
>> static inline u8 _inb(unsigned long addr)
>> {
>> u8 val;
>>
>> __io_pbr();
>> val = __raw_readb(PCI_IOBASE + addr);
>> __io_par(val);
>> return val;
>> }
>>
>> I can expand the commit text, but I would like to get some comments from the
>> original author to confirm if this is an issue.
>
> I think your original version is fine, this was clearly just a typo and I've
> applied your fix now and will forward it to Linus in the next few days,
> giving John the chance to add his Ack or further comments.
>
> Thanks a lot for spotting it and sending a fix.
Thanks Arnd.
Yeah, these look like copy+paste errors on my part:
Reviewed-by: John Garry <[email protected]>
I'll give this patch a spin, but not expecting any differences (since
original seems ok).
Note that kbuild robot also reported this:
https://lore.kernel.org/lkml/202007140549.J7X9BVPT%[email protected]/
Extract:
include/asm-generic/io.h:521:22: sparse: sparse: incorrect type in
argument 1 (different base types) @@ expected unsigned int
[usertype] value @@ got restricted __le32 [usertype] @@
include/asm-generic/io.h:521:22: sparse: expected unsigned int
[usertype] value
include/asm-generic/io.h:521:22: sparse: got restricted __le32
[usertype]
But they look like issues which were already in the existing code. I tried to
recreate them to verify any change, but trying to manually upgrade glibc
busted my machine :(
Thanks,
John
On Mon, Jul 27, 2020 at 10:30 AM John Garry <[email protected]> wrote:
> On 27/07/2020 09:04, Arnd Bergmann wrote:
> > On Sun, Jul 26, 2020 at 2:53 PM Stafford Horne <[email protected]> wrote:
> >>
> >> On Sun, Jul 26, 2020 at 12:00:37PM +0300, Andy Shevchenko wrote:
> >>> On Sun, Jul 26, 2020 at 6:14 AM Stafford Horne <[email protected]> wrote:
> >>>>
> >>>> The return types of the functions _inb, _inw and _inl are all u16, which looks
> >>>> wrong. This patch makes them u8, u16 and u32 respectively.
> >>>>
> >>>> The original commit text for these does not indicate that these should
> >>>> be all forced to u16.
> >>>
> >>> Is it aligned with all architectures that support this interface natively?
> >>>
> >>> (The return value is arch-dependent AFAIU, so it might actually return
> >>> 16 bits for a byte read, but I agree that this is weird for a 32-bit value.
> >>> I think you have to elaborate more in the commit message.)
> >>
> >> Well, this is the generic I/O code; at least these APIs appear not to differ
> >> for each architecture. The value read by the architecture-dependent code, i.e.
> >> __raw_readb() below, is placed into a u8. So I think the output of the
> >> function will be u8.
> >>
> >> static inline u8 _inb(unsigned long addr)
> >> {
> >> u8 val;
> >>
> >> __io_pbr();
> >> val = __raw_readb(PCI_IOBASE + addr);
> >> __io_par(val);
> >> return val;
> >> }
> >>
> >> I can expand the commit text, but I would like to get some comments from the
> >> original author to confirm if this is an issue.
> >
> > I think your original version is fine, this was clearly just a typo and I've
> > applied your fix now and will forward it to Linus in the next few days,
> > giving John the chance to add his Ack or further comments.
> >
> > Thanks a lot for spotting it and sending a fix.
>
> Thanks Arnd.
>
> Yeah, these look like copy+paste errors on my part:
>
> Reviewed-by: John Garry <[email protected]>
Thanks!
>
> I'll give this patch a spin, but not expecting any differences (since
> original seems ok).
>
> Note that kbuild robot also reported this:
> https://lore.kernel.org/lkml/202007140549.J7X9BVPT%[email protected]/
>
> Extract:
>
> include/asm-generic/io.h:521:22: sparse: sparse: incorrect type in
> argument 1 (different base types) @@ expected unsigned int
> [usertype] value @@ got restricted __le32 [usertype] @@
> include/asm-generic/io.h:521:22: sparse: expected unsigned int
> [usertype] value
> include/asm-generic/io.h:521:22: sparse: got restricted __le32
> [usertype]
>
> But they look like issues which were already in the existing code.
Yes, this driver code (atm/ambassador.c) seems to have been broken that
way since it was merged in 1999.
Arnd