2008-03-20 17:34:29

by Harvey Harrison

Subject: [RFC PATCH] kernel: add byteorder macros with alignment fixups

A common pattern in the kernel (especially networking) is:

le32_to_cpu(get_unaligned((__le32 *)x));

Repeat for various combinations of le/be and 64/32/16 bit. Add
a variant that operates on possibly unaligned pointers to
byteorder/generic.h

Signed-off-by: Harvey Harrison <[email protected]>
---
include/linux/byteorder/generic.h | 16 ++++++++++++++++
1 files changed, 16 insertions(+), 0 deletions(-)

diff --git a/include/linux/byteorder/generic.h b/include/linux/byteorder/generic.h
index d377155..9b1a7a4 100644
--- a/include/linux/byteorder/generic.h
+++ b/include/linux/byteorder/generic.h
@@ -73,6 +73,10 @@
* cpu_to_[bl]eXXp(__uXX x)
* [bl]eXX_to_cpup(__uXX x)
*
+ * The same, but takes a possibly unaligned pointer to the value to convert
+ * cpu_to_[bl]eXXp_un(__uXX x)
+ * [bl]eXX_to_cpup_un(__uXX x)
+ *
* The same, but change in situ
* cpu_to_[bl]eXXs(__uXX x)
* [bl]eXX_to_cpus(__uXX x)
@@ -84,6 +88,8 @@


#if defined(__KERNEL__)
+#include <asm/unaligned.h>
+
/*
* inside the kernel, we can use nicknames;
* outside of it, we must avoid POSIX namespace pollution...
@@ -126,6 +132,16 @@
#define be16_to_cpus __be16_to_cpus

/*
+ * Operates on possibly unaligned pointers
+ */
+#define le64_to_cpup_un(x) __le64_to_cpu(get_unaligned((__le64 *)(x)))
+#define le32_to_cpup_un(x) __le32_to_cpu(get_unaligned((__le32 *)(x)))
+#define le16_to_cpup_un(x) __le16_to_cpu(get_unaligned((__le16 *)(x)))
+#define be64_to_cpup_un(x) __be64_to_cpu(get_unaligned((__be64 *)(x)))
+#define be32_to_cpup_un(x) __be32_to_cpu(get_unaligned((__be32 *)(x)))
+#define be16_to_cpup_un(x) __be16_to_cpu(get_unaligned((__be16 *)(x)))
+
+/*
* They have to be macros in order to do the constant folding
* correctly - if the argument passed into a inline function
* it is no longer constant according to gcc..
--
1.5.4.4.684.g0e08
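
For illustration only, here is how a hypothetical caller (names invented
here, not taken from the patch, and assuming the usual kernel headers)
would shrink from the open-coded combination to one macro per field:

	struct foo_hdr {
		u16 len;
		u32 seq;
	};

	/* p points into a received buffer at an arbitrary offset */
	static void foo_parse_hdr(struct foo_hdr *hdr, u8 *p)
	{
		/* before: open-coded pattern */
		hdr->len = le16_to_cpu(get_unaligned((__le16 *)p));
		hdr->seq = le32_to_cpu(get_unaligned((__le32 *)(p + 2)));

		/* after: with the proposed helpers */
		hdr->len = le16_to_cpup_un(p);
		hdr->seq = le32_to_cpup_un(p + 2);
	}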



2008-03-20 18:29:25

by Al Viro

Subject: Re: [RFC PATCH] kernel: add byteorder macros with alignment fixups

On Thu, Mar 20, 2008 at 10:34:14AM -0700, Harvey Harrison wrote:
> A common pattern in the kernel (especially networking) is:
>
> le32_to_cpu(get_unaligned((__le32 *)x));
>
> Repeat for various combinations of le/be and 64/32/16 bit. Add
> a variant that operates on possibly unaligned pointers to
> byteorder/generic.h

... and asm/unaligned.h has just acquired fuckloads of places including
it indirectly. Not Nice(tm).

2008-03-20 18:37:43

by Harvey Harrison

Subject: Re: [RFC PATCH] kernel: add byteorder macros with alignment fixups

On Thu, 2008-03-20 at 18:29 +0000, Al Viro wrote:
> On Thu, Mar 20, 2008 at 10:34:14AM -0700, Harvey Harrison wrote:
> > A common pattern in the kernel (especially networking) is:
> >
> > le32_to_cpu(get_unaligned((__le32 *)x));
> >
> > Repeat for various combinations of le/be and 64/32/16 bit. Add
> > a variant that operates on possibly unaligned pointers to
> > byteorder/generic.h
>
> ... and asm/unaligned.h has just acquired fuckloads of places including
> it indirectly. Not Nice(tm).

Time for linux/unaligned.h?

Do you think the helpers are worth it...wherever they end up?

Harvey

2008-03-20 19:10:11

by Al Viro

Subject: Re: [RFC PATCH] kernel: add byteorder macros with alignment fixups

On Thu, Mar 20, 2008 at 11:37:24AM -0700, Harvey Harrison wrote:
> On Thu, 2008-03-20 at 18:29 +0000, Al Viro wrote:
> > On Thu, Mar 20, 2008 at 10:34:14AM -0700, Harvey Harrison wrote:
> > > A common pattern in the kernel (especially networking) is:
> > >
> > > le32_to_cpu(get_unaligned((__le32 *)x));
> > >
> > > Repeat for various combinations of le/be and 64/32/16 bit. Add
> > > a variant that operates on possibly unaligned pointers to
> > > byteorder/generic.h
> >
> > ... and asm/unaligned.h has just acquired fuckloads of places including
> > it indirectly. Not Nice(tm).
>
> Time for linux/unaligned.h?

Er... And just how would that improve things?

2008-03-20 19:22:53

by Harvey Harrison

Subject: Re: [RFC PATCH] kernel: add byteorder macros with alignment fixups

Create linux/unaligned.h to hold a common pattern in the kernel:

le32_to_cpu(get_unaligned((__le32 *)x));

Repeat for various combinations of le/be and 64/32/16 bit. Add
a variant that operates on possibly unaligned pointers to
byteorder/generic.h

Signed-off-by: Harvey Harrison <[email protected]>
---
Now the indirect include of asm/unaligned is opt-in when places
add the linux/unaligned header.

include/linux/unaligned.h | 42 ++++++++++++++++++++++++++++++++++++++++++
1 files changed, 42 insertions(+), 0 deletions(-)

diff --git a/include/linux/unaligned.h b/include/linux/unaligned.h
new file mode 100644
index 0000000..7d8fddc
--- /dev/null
+++ b/include/linux/unaligned.h
@@ -0,0 +1,42 @@
+#ifndef _LINUX_UNALIGNED_H_
+#define _LINUX_UNALIGNED_H_
+
+#include <linux/types.h>
+#include <asm/byteorder.h>
+#include <asm/unaligned.h>
+
+#ifdef __KERNEL__
+
+static inline u64 le64_to_cpu_unaligned(void *p)
+{
+ return __le64_to_cpu(get_unaligned((__le64 *)p));
+}
+
+static inline u32 le32_to_cpu_unaligned(void *p)
+{
+ return __le32_to_cpu(get_unaligned((__le32 *)p));
+}
+
+static inline u16 le16_to_cpu_unaligned(void *p)
+{
+ return __le16_to_cpu(get_unaligned((__le16 *)p));
+}
+
+static inline u64 be64_to_cpu_unaligned(void *p)
+{
+ return __be64_to_cpu(get_unaligned((__be64 *)p));
+}
+
+static inline u32 be32_to_cpu_unaligned(void *p)
+{
+ return __be32_to_cpu(get_unaligned((__be32 *)p));
+}
+
+static inline u16 be16_to_cpu_unaligned(void *p)
+{
+ return __be16_to_cpu(get_unaligned((__be16 *)p));
+}
+
+#endif /* KERNEL */
+
+#endif /* _LINUX_UNALIGNED_H */
--
1.5.4.4.684.g0e08
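
As a rough illustration (a hypothetical driver file, not part of this
patch), the dependency on the unaligned-access machinery is now spelled
out at the point of use instead of arriving indirectly:

	/* drivers/foo/foo.c (hypothetical) */
	#include <linux/unaligned.h>	/* opt in to the helpers */

	static u32 foo_read_status(u8 *p)
	{
		return le32_to_cpu_unaligned(p);
	}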


2008-03-20 19:41:44

by Harvey Harrison

Subject: [PATCH] kernel: add byteorder function with alignment fixups

Add helpers for the pattern put_unaligned(cpu_to_le32(val), (__le32 *)p)
to linux/unaligned.h

Repeat for various combinations of le/be and 64/32/16 bit.

Signed-off-by: Harvey Harrison <[email protected]>
---
Applies on top of my last patch, makes it a symmetric API.

include/linux/unaligned.h | 30 ++++++++++++++++++++++++++++++
1 files changed, 30 insertions(+), 0 deletions(-)

diff --git a/include/linux/unaligned.h b/include/linux/unaligned.h
index 7d8fddc..c9aa286 100644
--- a/include/linux/unaligned.h
+++ b/include/linux/unaligned.h
@@ -37,6 +37,36 @@ static inline u16 be16_to_cpu_unaligned(void *p)
return __be16_to_cpu(get_unaligned((__be16 *)p));
}

+static inline void cpu_to_le64_unaligned(u64 val, void *p)
+{
+ put_unaligned(cpu_to_le64(val), (__le64 *)p);
+}
+
+static inline void cpu_to_le32_unaligned(u32 val, void *p)
+{
+ put_unaligned(cpu_to_le32(val), (__le32 *)p);
+}
+
+static inline void cpu_to_le16_unaligned(u16 val, void *p)
+{
+ put_unaligned(cpu_to_le16(val), (__le16 *)p);
+}
+
+static inline void cpu_to_be64_unaligned(u64 val, void *p)
+{
+ put_unaligned(cpu_to_be64(val), (__be64 *)p);
+}
+
+static inline void cpu_to_be32_unaligned(u32 val, void *p)
+{
+ put_unaligned(cpu_to_be32(val), (__be32 *)p);
+}
+
+static inline void cpu_to_be16_unaligned(u16 val, void *p)
+{
+ put_unaligned(cpu_to_be16(val), (__be16 *)p);
+}
+
#endif /* KERNEL */

#endif /* _LINUX_UNALIGNED_H */
--
1.5.4.4.684.g0e08
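
A minimal sketch of the write side in use (a hypothetical caller, not
from the patch), mirroring the read example earlier in the thread:

	static void foo_build_hdr(u8 *p, u16 len, u32 seq)
	{
		/* before: open-coded pattern */
		put_unaligned(cpu_to_le16(len), (__le16 *)p);
		put_unaligned(cpu_to_le32(seq), (__le32 *)(p + 2));

		/* after: with the proposed helpers */
		cpu_to_le16_unaligned(len, p);
		cpu_to_le32_unaligned(seq, p + 2);
	}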


2008-03-20 20:08:21

by H. Peter Anvin

Subject: Re: [RFC PATCH] kernel: add byteorder macros with alignment fixups

Harvey Harrison wrote:
> A common pattern in the kernel (especially networking) is:
>
> le32_to_cpu(get_unaligned((__le32 *)x));
>
> Repeat for various combinations of le/be and 64/32/16 bit. Add
> a variant that operates on possibly unaligned pointers to
> byteorder/generic.h
>

This should go in <asm-generic/*>, since on some architectures this can
be done significantly faster than the chained pattern: the byte
reordering comes for "free". Thus, architectures should be able to
override the generic version.

-hpa
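
A rough sketch of the idea (not a posted patch): a generic fallback
assembles the value bytewise, so the reordering is folded into the
unaligned access itself, while an architecture with cheap unaligned
loads could override it with what amounts to a plain load:

	/* asm-generic style fallback (sketch) */
	static inline u32 le32_to_cpu_unaligned(const void *p)
	{
		const u8 *b = p;

		return ((u32)b[3] << 24) | ((u32)b[2] << 16) |
		       ((u32)b[1] << 8)  |  (u32)b[0];
	}

	/* an arch like x86 could instead do:
	 *	return le32_to_cpu(*(const __le32 *)p);
	 */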

2008-03-20 20:17:19

by Harvey Harrison

Subject: [PATCH] kernel: create linux/unaligned.h

Add helpers for the following two patterns currently found in the
kernel:

le32_to_cpu(get_unaligned((__le32 *)p));
put_unaligned(cpu_to_le32(x), (__le32 *)p);

Becomes:
le32_to_cpu_unaligned(p);
cpu_to_le32_unaligned(x, p);

There are also some hand-rolled functions implementing this
with byte-shifts that can be replaced. To avoid new indirect
includes of asm/unaligned.h, create linux/unaligned.h to make
it opt in for new code that wants to use these helpers.

Signed-off-by: Harvey Harrison <[email protected]>
---
This is a rollup of all the previous ones, added kerneldocs and
have CC'd Randy.

include/linux/unaligned.h | 138 +++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 138 insertions(+), 0 deletions(-)

diff --git a/include/linux/unaligned.h b/include/linux/unaligned.h
new file mode 100644
index 0000000..035c1c7
--- /dev/null
+++ b/include/linux/unaligned.h
@@ -0,0 +1,138 @@
+#ifndef _LINUX_UNALIGNED_H_
+#define _LINUX_UNALIGNED_H_
+
+#include <linux/types.h>
+#include <asm/byteorder.h>
+#include <asm/unaligned.h>
+
+#ifdef __KERNEL__
+
+/**
+ * le64_to_cpu_unaligned - read a le64 from a possibly unaligned pointer
+ * @p: pointer to read from
+ *
+ * Returns a u64 in cpu byteorder
+ */
+static inline u64 le64_to_cpu_unaligned(void *p)
+{
+ return le64_to_cpu(get_unaligned((__le64 *)p));
+}
+
+/**
+ * le32_to_cpu_unaligned - read a le32 from a possibly unaligned pointer
+ * @p: pointer to read from
+ *
+ * Returns a u32 in cpu byteorder
+ */
+static inline u32 le32_to_cpu_unaligned(void *p)
+{
+ return le32_to_cpu(get_unaligned((__le32 *)p));
+}
+
+/**
+ * le16_to_cpu_unaligned - read a le16 from a possibly unaligned pointer
+ * @p: pointer to read from
+ *
+ * Returns a u16 in cpu byteorder
+ */
+static inline u16 le16_to_cpu_unaligned(void *p)
+{
+ return le16_to_cpu(get_unaligned((__le16 *)p));
+}
+
+/**
+ * be64_to_cpu_unaligned - read a be64 from a possibly unaligned pointer
+ * @p: pointer to read from
+ *
+ * Returns a u64 in cpu byteorder
+ */
+static inline u64 be64_to_cpu_unaligned(void *p)
+{
+ return be64_to_cpu(get_unaligned((__be64 *)p));
+}
+
+/**
+ * be32_to_cpu_unaligned - read a be32 from a possibly unaligned pointer
+ * @p: pointer to read from
+ *
+ * Returns a u32 in cpu byteorder
+ */
+static inline u32 be32_to_cpu_unaligned(void *p)
+{
+ return be32_to_cpu(get_unaligned((__be32 *)p));
+}
+
+/**
+ * be16_to_cpu_unaligned - read a be16 from a possibly unaligned pointer
+ * @p: pointer to read from
+ *
+ * Returns a u16 in cpu byteorder
+ */
+static inline u16 be16_to_cpu_unaligned(void *p)
+{
+ return be16_to_cpu(get_unaligned((__be16 *)p));
+}
+
+/**
+ * cpu_to_le64_unaligned - write a u64 in le-byteorder to a possibly unaligned pointer
+ * @val: value to be written
+ * @p: pointer to write to
+ */
+static inline void cpu_to_le64_unaligned(u64 val, void *p)
+{
+ put_unaligned(cpu_to_le64(val), (__le64 *)p);
+}
+
+/**
+ * cpu_to_le32_unaligned - write a u32 in le-byteorder to a possibly unaligned pointer
+ * @val: value to be written
+ * @p: pointer to write to
+ */
+static inline void cpu_to_le32_unaligned(u32 val, void *p)
+{
+ put_unaligned(cpu_to_le32(val), (__le32 *)p);
+}
+
+/**
+ * cpu_to_le16_unaligned - write a u16 in le-byteorder to a possibly unaligned pointer
+ * @val: value to be written
+ * @p: pointer to write to
+ */
+static inline void cpu_to_le16_unaligned(u16 val, void *p)
+{
+ put_unaligned(cpu_to_le16(val), (__le16 *)p);
+}
+
+/**
+ * cpu_to_be64_unaligned - write a u64 in be-byteorder to a possibly unaligned pointer
+ * @val: value to be written
+ * @p: pointer to write to
+ */
+static inline void cpu_to_be64_unaligned(u64 val, void *p)
+{
+ put_unaligned(cpu_to_be64(val), (__be64 *)p);
+}
+
+/**
+ * cpu_to_be32_unaligned - write a u32 in be-byteorder to a possibly unaligned pointer
+ * @val: value to be written
+ * @p: pointer to write to
+ */
+static inline void cpu_to_be32_unaligned(u32 val, void *p)
+{
+ put_unaligned(cpu_to_be32(val), (__be32 *)p);
+}
+
+/**
+ * cpu_to_be16_unaligned - write a u16 in be-byteorder to a possibly unaligned pointer
+ * @val: value to be written
+ * @p: pointer to write to
+ */
+static inline void cpu_to_be16_unaligned(u16 val, void *p)
+{
+ put_unaligned(cpu_to_be16(val), (__be16 *)p);
+}
+
+#endif /* KERNEL */
+
+#endif /* _LINUX_UNALIGNED_H */
--
1.5.4.4.684.g0e08
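
For context, the "hand-rolled functions" mentioned in the changelog
typically look something like the following (an invented example, not a
specific in-tree function), and collapse to a single helper call:

	/* open-coded big-endian read, as often found in drivers */
	static u32 foo_get_be32(u8 *p)
	{
		return ((u32)p[0] << 24) | ((u32)p[1] << 16) |
		       ((u32)p[2] << 8)  |  (u32)p[3];
	}

	/* with the new helper, the same function becomes */
	static u32 foo_get_be32(u8 *p)
	{
		return be32_to_cpu_unaligned(p);
	}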


2008-03-20 20:40:34

by Randy Dunlap

Subject: Re: [PATCH] kernel: create linux/unaligned.h

On Thu, 20 Mar 2008 13:16:57 -0700 Harvey Harrison wrote:

> Add helpers for the following two patterns currently found in the
> kernel:
>
> le32_to_cpu(get_unaligned((__le32 *)p));
> put_unaligned(cpu_to_le32(x), (__le32 *)p);
>
> Becomes:
> le32_to_cpu_unaligned(p);
> cpu_to_le32_unaligned(x, p);
>
> There are also some hand-rolled functions implementing this
> with byte-shifts that can be replaced. To avoid new indirect
> includes of asm/unaligned.h, create linux/unaligned.h to make
> it opt in for new code that wants to use these helpers.
>
> Signed-off-by: Harvey Harrison <[email protected]>
> ---
> This is a rollup of all the previous ones, added kerneldocs and
> have CC'd Randy.
>
> include/linux/unaligned.h | 138 +++++++++++++++++++++++++++++++++++++++++++++
> 1 files changed, 138 insertions(+), 0 deletions(-)
>
> diff --git a/include/linux/unaligned.h b/include/linux/unaligned.h
> new file mode 100644
> index 0000000..035c1c7
> --- /dev/null
> +++ b/include/linux/unaligned.h
> @@ -0,0 +1,138 @@
> +#ifndef _LINUX_UNALIGNED_H_
> +#define _LINUX_UNALIGNED_H_
> +
> +#include <linux/types.h>
> +#include <asm/byteorder.h>
> +#include <asm/unaligned.h>
> +
> +#ifdef __KERNEL__
> +
> +/**
> + * le64_to_cpu_unaligned - read a le64 from a possibly unaligned pointer

I would write these as "an le64" etc., but then I read them as
"l e 64".


> + * @p: pointer to read from
> + *
> + * Returns a u64 in cpu byteorder

And I would write "CPU", but some people don't do that. ;)

Acked-by: Randy Dunlap <[email protected]>


Thanks.

> + */
> +static inline u64 le64_to_cpu_unaligned(void *p)
> +{
> + return le64_to_cpu(get_unaligned((__le64 *)p));
> +}
> +
> +/**
> + * le32_to_cpu_unaligned - read a le32 from a possibly unaligned pointer
> + * @p: pointer to read from
> + *
> + * Returns a u32 in cpu byteorder
> + */
> +static inline u32 le32_to_cpu_unaligned(void *p)
> +{
> + return le32_to_cpu(get_unaligned((__le32 *)p));
> +}
> +
> +/**
> + * le16_to_cpu_unaligned - read a le16 from a possibly unaligned pointer
> + * @p: pointer to read from
> + *
> + * Returns a u16 in cpu byteorder
> + */
> +static inline u16 le16_to_cpu_unaligned(void *p)
> +{
> + return le16_to_cpu(get_unaligned((__le16 *)p));
> +}
> +
> +/**
> + * be64_to_cpu_unaligned - read a be64 from a possibly unaligned pointer
> + * @p: pointer to read from
> + *
> + * Returns a u64 in cpu byteorder
> + */
> +static inline u64 be64_to_cpu_unaligned(void *p)
> +{
> + return be64_to_cpu(get_unaligned((__be64 *)p));
> +}
> +
> +/**
> + * be32_to_cpu_unaligned - read a be32 from a possibly unaligned pointer
> + * @p: pointer to read from
> + *
> + * Returns a u32 in cpu byteorder
> + */
> +static inline u32 be32_to_cpu_unaligned(void *p)
> +{
> + return be32_to_cpu(get_unaligned((__be32 *)p));
> +}
> +
> +/**
> + * be16_to_cpu_unaligned - read a be16 from a possibly unaligned pointer
> + * @p: pointer to read from
> + *
> + * Returns a u16 in cpu byteorder
> + */
> +static inline u16 be16_to_cpu_unaligned(void *p)
> +{
> + return be16_to_cpu(get_unaligned((__be16 *)p));
> +}
> +
> +/**
> + * cpu_to_le64_unaligned - write a u64 in le-byteorder to a possibly unaligned pointer
> + * @val: value to be written
> + * @p: pointer to write to
> + */
> +static inline void cpu_to_le64_unaligned(u64 val, void *p)
> +{
> + put_unaligned(cpu_to_le64(val), (__le64 *)p);
> +}
> +
> +/**
> + * cpu_to_le32_unaligned - write a u32 in le-byteorder to a possibly unaligned pointer
> + * @val: value to be written
> + * @p: pointer to write to
> + */
> +static inline void cpu_to_le32_unaligned(u32 val, void *p)
> +{
> + put_unaligned(cpu_to_le32(val), (__le32 *)p);
> +}
> +
> +/**
> + * cpu_to_le16_unaligned - write a u16 in le-byteorder to a possibly unaligned pointer
> + * @val: value to be written
> + * @p: pointer to write to
> + */
> +static inline void cpu_to_le16_unaligned(u16 val, void *p)
> +{
> + put_unaligned(cpu_to_le16(val), (__le16 *)p);
> +}
> +
> +/**
> + * cpu_to_be64_unaligned - write a u64 in be-byteorder to a possibly unaligned pointer
> + * @val: value to be written
> + * @p: pointer to write to
> + */
> +static inline void cpu_to_be64_unaligned(u64 val, void *p)
> +{
> + put_unaligned(cpu_to_be64(val), (__be64 *)p);
> +}
> +
> +/**
> + * cpu_to_be32_unaligned - write a u32 in be-byteorder to a possibly unaligned pointer
> + * @val: value to be written
> + * @p: pointer to write to
> + */
> +static inline void cpu_to_be32_unaligned(u32 val, void *p)
> +{
> + put_unaligned(cpu_to_be32(val), (__be32 *)p);
> +}
> +
> +/**
> + * cpu_to_be16_unaligned - write a u16 in be-byteorder to a possibly unaligned pointer
> + * @val: value to be written
> + * @p: pointer to write to
> + */
> +static inline void cpu_to_be16_unaligned(u16 val, void *p)
> +{
> + put_unaligned(cpu_to_be16(val), (__be16 *)p);
> +}
> +
> +#endif /* KERNEL */
> +
> +#endif /* _LINUX_UNALIGNED_H */
> --

---
~Randy

2008-03-24 16:17:31

by Pavel Machek

Subject: Re: [RFC PATCH] kernel: add byteorder macros with alignment fixups

On Thu 2008-03-20 12:22:33, Harvey Harrison wrote:
> Create linux/unaligned.h to hold a common pattern in the kernel:
>
> le32_to_cpu(get_unaligned((__le32 *)x));
>
> Repeat for various combinations of le/be and 64/32/16 bit. Add
> a variant that operates on possibly unaligned pointers to
> byteorder/generic.h
>
> Signed-off-by: Harvey Harrison <[email protected]>
> ---
> Now the indirect include of asm/unaligned is opt-in when places
> add the linux/unaligned header.
>
> include/linux/unaligned.h | 42 ++++++++++++++++++++++++++++++++++++++++++
> 1 files changed, 42 insertions(+), 0 deletions(-)
>
> diff --git a/include/linux/unaligned.h b/include/linux/unaligned.h
> new file mode 100644
> index 0000000..7d8fddc
> --- /dev/null
> +++ b/include/linux/unaligned.h
> @@ -0,0 +1,42 @@
> +#ifndef _LINUX_UNALIGNED_H_
> +#define _LINUX_UNALIGNED_H_
> +
> +#include <linux/types.h>
> +#include <asm/byteorder.h>
> +#include <asm/unaligned.h>
> +
> +#ifdef __KERNEL__
> +
> +static inline u64 le64_to_cpu_unaligned(void *p)
> +{
> + return __le64_to_cpu(get_unaligned((__le64 *)p));
> +}

Why the cast? Should le64_to_cpu() take __le64 * parameter, so that normal
typechecking still works?
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

2008-03-24 16:35:59

by Harvey Harrison

Subject: Re: [RFC PATCH] kernel: add byteorder macros with alignment fixups

On Sun, 2008-03-23 at 14:59 +0100, Pavel Machek wrote:
> On Thu 2008-03-20 12:22:33, Harvey Harrison wrote:
> > +#ifdef __KERNEL__
> > +
> > +static inline u64 le64_to_cpu_unaligned(void *p)
> > +{
> > + return __le64_to_cpu(get_unaligned((__le64 *)p));
> > +}
>
> Why the cast? Should le64_to_cpu() take __le64 * parameter, so that normal
> typechecking still works?
> Pavel

Well, most places that would use this have a char * or a u8 *, so I was
avoiding a cast in most callers.

HPA made a good suggestion that this belongs in asm-generic, allowing
arches to optimize it, so I'm reworking with that approach.

Harvey
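
For comparison, the strictly typed variant Pavel is suggesting would
look roughly like this (a sketch, not a posted patch); it keeps sparse's
endianness checking on the parameter, but pushes a cast out to callers
that only have a u8 * or char *:

	static inline u64 le64_to_cpu_unaligned(const __le64 *p)
	{
		return __le64_to_cpu(get_unaligned(p));
	}

	/* a caller holding only a byte pointer must now cast: */
	static u64 foo_read_timestamp(const u8 *buf)
	{
		return le64_to_cpu_unaligned((const __le64 *)buf);
	}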