2020-10-21 12:49:24

by Peter Zijlstra

Subject: GCC section alignment, and GCC-4.9 being a weird one

On Tue, Oct 20, 2020 at 04:39:38PM -0700, Florian Fainelli wrote:
> This patch causes all files under kernel/sched/* that include sched.h to
> be rebuilt whenever the value of CONFIG_BLK_DEV_INITRD changes. There are at
> least two build systems (buildroot and OpenWrt) that toggle this
> configuration value in order to produce a kernel image without an
> initramfs, and one with.
>
> On ARM we get all of these to be needlessly rebuilt:

Is it really ARM specific? AFAICT this should happen on everything.

> Short of moving the STRUCT_ALIGNMENT to a separate header that would not
> be subject to any configuration key change, can you think of a good way
> to avoid these rebuilds, including for architectures like ARM that ship
> their own vmlinux.lds.h? I would not say this is a bug, but it is
> definitively an inconvenience.

Well, no :/ I barely made it work in the first place. This linker cruft
is not my forte. GCC-4.9 being 'special' here is just weird in any case.

We can ask our friends on linux-toolchains; maybe they'll have a clue.

Guys, the problem is the below commit which, for dubious raisins, makes
kernel/sched/sched.h depend on asm-generic/vmlinux.lds.h and triggers
rebuilds whenever a CONFIG mentioned in asm-generic/vmlinux.lds.h
changes.

Is there an explanation for why GCC-4.9 is weird and is there a better
way to find the appropriate value?
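
(For anyone without the scheduler context: SCHED_DATA makes the linker
script place the individual __*_sched_class input sections back to back
between a begin and an end symbol, and the scheduler then walks that range
with plain pointer arithmetic. A minimal sketch of such a walk; the end
symbol is visible in the linker script hunk below, while the begin symbol
and the helper function are assumed here purely for illustration:

extern struct sched_class __begin_sched_classes[];
extern struct sched_class __end_sched_classes[];

/* Illustrative only: step through the array the linker script builds. */
static int count_sched_classes(void)
{
	const struct sched_class *class;
	int n = 0;

	for (class = __begin_sched_classes; class < __end_sched_classes; class++)
		n++;

	return n;
}

The class++ step only lands on the next class if each object is emitted at
exactly sizeof(struct sched_class) intervals, which the unexpected 64 byte
alignment from GCC-4.9 can break.)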


---
commit 85c2ce9104eb93517db2037699471c517e81f9b4
Author: Peter Zijlstra <[email protected]>
Date: Tue Jun 30 16:49:05 2020 +0200

sched, vmlinux.lds: Increase STRUCT_ALIGNMENT to 64 bytes for GCC-4.9

For some mysterious reason GCC-4.9 has a 64 byte section alignment for
structures, while all other GCC versions (and Clang) tested (including 4.8
and 5.0) are fine with the 32 byte alignment.

Getting this right is important for the new SCHED_DATA macro that
creates an explicitly ordered array of 'struct sched_class' in the
linker script and expects pointer arithmetic to work.

Fixes: c3a340f7e7ea ("sched: Have sched_class_highest define by vmlinux.lds.h")
Reported-by: kernel test robot <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index 66fb84c3dc7e..3ceb4b7279ec 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -108,6 +108,17 @@
#define SBSS_MAIN .sbss
#endif

+/*
+ * GCC 4.5 and later have a 32 bytes section alignment for structures.
+ * Except GCC 4.9, that feels the need to align on 64 bytes.
+ */
+#if __GNUC__ == 4 && __GNUC_MINOR__ == 9
+#define STRUCT_ALIGNMENT 64
+#else
+#define STRUCT_ALIGNMENT 32
+#endif
+#define STRUCT_ALIGN() . = ALIGN(STRUCT_ALIGNMENT)
+
/*
* The order of the sched class addresses are important, as they are
* used to determine the order of the priority of each sched class in
@@ -123,13 +134,6 @@
*(__stop_sched_class) \
__end_sched_classes = .;

-/*
- * Align to a 32 byte boundary equal to the
- * alignment gcc 4.5 uses for a struct
- */
-#define STRUCT_ALIGNMENT 32
-#define STRUCT_ALIGN() . = ALIGN(STRUCT_ALIGNMENT)
-
/* The actual configuration determine if the init/exit sections
* are handled as text/data or they can be discarded (which
* often happens at runtime)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 5aa6661ecaf1..9bef2dd01247 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -67,6 +67,7 @@
#include <linux/tsacct_kern.h>

#include <asm/tlb.h>
+#include <asm-generic/vmlinux.lds.h>

#ifdef CONFIG_PARAVIRT
# include <asm/paravirt.h>
@@ -1810,7 +1811,7 @@ struct sched_class {
#ifdef CONFIG_FAIR_GROUP_SCHED
void (*task_change_group)(struct task_struct *p, int type);
#endif
-} __aligned(32); /* STRUCT_ALIGN(), vmlinux.lds.h */
+} __aligned(STRUCT_ALIGNMENT); /* STRUCT_ALIGN(), vmlinux.lds.h */

static inline void put_prev_task(struct rq *rq, struct task_struct *prev)
{



2020-10-21 14:28:34

by Jakub Jelinek

Subject: Re: GCC section alignment, and GCC-4.9 being a weird one

On Wed, Oct 21, 2020 at 10:00:31AM +0200, Peter Zijlstra wrote:
> On Tue, Oct 20, 2020 at 04:39:38PM -0700, Florian Fainelli wrote:
> > This patch causes all files under kernel/sched/* that include sched.h to
> > be rebuilt whenever the value of CONFIG_BLK_DEV_INITRD changes. There are at
> > least two build systems (buildroot and OpenWrt) that toggle this
> > configuration value in order to produce a kernel image without an
> > initramfs, and one with.
> >
> > On ARM we get all of these to be needlessly rebuilt:
>
> Is it really ARM specific? AFAICT this should happen on everything.
>
> > Short of moving the STRUCT_ALIGNMENT to a separate header that would not
> > be subject to any configuration key change, can you think of a good way
> > to avoid these rebuilds, including for architectures like ARM that ship
> > their own vmlinux.lds.h? I would not say this is a bug, but it is
> > definitively an inconvenience.
>
> Well, no :/ I barely made it work in the first place. This linker cruft
> is not my forte. GCC-4.9 being 'special' here is just weird in any case.
>
> We can ask our friends on linux-toolchains; maybe they'll have a clue.
>
> Guys, the problem is the below commit which, for dubious raisins, makes
> kernel/sched/sched.h depend on asm-generic/vmlinux.lds.h and triggers
> rebuilds whenever a CONFIG mentioned in asm-generic/vmlinux.lds.h
> changes.
>
> Is there an explanation for why GCC-4.9 is weird and is there a better
> way to find the appropriate value?

I have only looked at x86.
If you are talking about:
struct sched_class {
#define A(n) void (*n) (void);
#define B(n) A(n##0) A(n##1) A(n##2) A(n##3) A(n##4)
B(fn1) B(fn2) B(fn3) B(fn4)
A(fn50) A(fn51) A(fn52) A(fn53)
} __attribute__((aligned(32)));

struct sched_class a;

having the variable aligned to 64 bytes rather than 32 bytes, it
started with
https://gcc.gnu.org/r0-127405-gd3c2fee09607e7d70cc7e69822638fab2bda6c7b
and disappeared with
https://gcc.gnu.org/r5-5887-g239711f6afe3070a11c4f1d9266588e8db1217ee

As the aligned attribute is just on the type and not on the actual
variables, I don't consider it a bug; GCC only guarantees the alignment
when the variable itself has user-specified alignment.
Otherwise the compiler can choose to align variables more if they
are large for performance reasons (to help vectorization e.g. when
the variable is copied around). Plus the ABI sometimes requires
bigger alignment too (e.g. on x86_64 the psABI
says that array variables larger than 15 bytes must have alignment at least
16 bytes).

So, if you tweak the above testcase to have either
struct sched_class a __attribute__((aligned(32)));
or even remove it also from the sched_class struct, you should get
the behavior you want even from GCC 4.9 and other GCC versions.

And, if e.g. 32-byte alignment is not what you really need, but e.g.
natural alignment of the struct sched_class would be sufficient, you could
use __attribute__((aligned (alignof (struct sched_class)))) on the variables
instead (perhaps wrapped in some macro).
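
For instance, such a wrapper could look roughly like the sketch below (the
macro name is made up for illustration and is not an existing kernel
define):

#define SCHED_CLASS_ALIGN \
	__attribute__((aligned(__alignof__(struct sched_class))))

const struct sched_class fair_sched_class
	__attribute__((section("__fair_sched_class")))
	SCHED_CLASS_ALIGN = {
	.enqueue_task	= enqueue_task_fair,
	/* ... remaining methods as in the existing initializer ... */
};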

BTW, the 32 vs. 64 vs. whatever else byte alignment is heavily architecture
and GCC version dependent; it is not the case that on all arches larger
structures will always be 32 byte aligned no matter what.

> Fixes: c3a340f7e7ea ("sched: Have sched_class_highest define by vmlinux.lds.h")
> Reported-by: kernel test robot <[email protected]>
> Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> Link: https://lkml.kernel.org/r/[email protected]
>
> diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
> index 66fb84c3dc7e..3ceb4b7279ec 100644
> --- a/include/asm-generic/vmlinux.lds.h
> +++ b/include/asm-generic/vmlinux.lds.h
> @@ -108,6 +108,17 @@
> #define SBSS_MAIN .sbss
> #endif
>
> +/*
> + * GCC 4.5 and later have a 32 bytes section alignment for structures.
> + * Except GCC 4.9, that feels the need to align on 64 bytes.
> + */
> +#if __GNUC__ == 4 && __GNUC_MINOR__ == 9
> +#define STRUCT_ALIGNMENT 64
> +#else
> +#define STRUCT_ALIGNMENT 32
> +#endif
> +#define STRUCT_ALIGN() . = ALIGN(STRUCT_ALIGNMENT)
> +
> /*
> * The order of the sched class addresses are important, as they are
> * used to determine the order of the priority of each sched class in
> @@ -123,13 +134,6 @@
> *(__stop_sched_class) \
> __end_sched_classes = .;
>
> -/*
> - * Align to a 32 byte boundary equal to the
> - * alignment gcc 4.5 uses for a struct
> - */
> -#define STRUCT_ALIGNMENT 32
> -#define STRUCT_ALIGN() . = ALIGN(STRUCT_ALIGNMENT)
> -
> /* The actual configuration determine if the init/exit sections
> * are handled as text/data or they can be discarded (which
> * often happens at runtime)
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 5aa6661ecaf1..9bef2dd01247 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -67,6 +67,7 @@
> #include <linux/tsacct_kern.h>
>
> #include <asm/tlb.h>
> +#include <asm-generic/vmlinux.lds.h>
>
> #ifdef CONFIG_PARAVIRT
> # include <asm/paravirt.h>
> @@ -1810,7 +1811,7 @@ struct sched_class {
> #ifdef CONFIG_FAIR_GROUP_SCHED
> void (*task_change_group)(struct task_struct *p, int type);
> #endif
> -} __aligned(32); /* STRUCT_ALIGN(), vmlinux.lds.h */
> +} __aligned(STRUCT_ALIGNMENT); /* STRUCT_ALIGN(), vmlinux.lds.h */
>
> static inline void put_prev_task(struct rq *rq, struct task_struct *prev)
> {
>

Jakub

2020-10-22 01:34:22

by Peter Zijlstra

Subject: Re: GCC section alignment, and GCC-4.9 being a weird one

On Wed, Oct 21, 2020 at 03:18:06PM +0200, Jakub Jelinek wrote:

> As the aligned attribute is just on the type and not on the actual
> variables, I don't consider it a bug; GCC only guarantees the alignment
> when the variable itself has user-specified alignment.
> Otherwise the compiler can choose to align variables more if they
> are large for performance reasons (to help vectorization e.g. when
> the variable is copied around). Plus the ABI sometimes requires
> bigger alignment too (e.g. on x86_64 the psABI
> says that array variables larger than 15 bytes must have alignment at least
> 16 bytes).
>
> So, if you tweak the above testcase to have either
> struct sched_class a __attribute__((aligned(32)));
> or even remove it also from the sched_class struct, you should get
> the behavior you want even from GCC 4.9 and other GCC versions.
>
> And, if e.g. 32-byte alignment is not what you really need, but e.g.
> natural alignment of the struct sched_class would be sufficient, you could
> use __attribute__((aligned (alignof (struct sched_class)))) on the variables
> instead (perhaps wrapped in some macro).
>
> BTW, the 32 vs. 64 vs. whatever else byte alignment is heavily architecture
> and GCC version dependent; it is not the case that on all arches larger
> structures will always be 32 byte aligned no matter what.


Ah, thanks!

In that case something like the below ought to make it good.

I'll go feed it to the robots, see if anything falls over.

---
kernel/sched/deadline.c | 4 +++-
kernel/sched/fair.c | 4 +++-
kernel/sched/idle.c | 4 +++-
kernel/sched/rt.c | 4 +++-
kernel/sched/sched.h | 3 +--
kernel/sched/stop_task.c | 3 ++-
6 files changed, 15 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 6d93f4518734..f4855203143d 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2504,7 +2504,9 @@ static void prio_changed_dl(struct rq *rq, struct task_struct *p,
}

const struct sched_class dl_sched_class
- __attribute__((section("__dl_sched_class"))) = {
+ __attribute__((section("__dl_sched_class")))
+ __attribute__((aligned(__alignof__(struct sched_class)))) = {
+
.enqueue_task = enqueue_task_dl,
.dequeue_task = dequeue_task_dl,
.yield_task = yield_task_dl,
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index aa4c6227cd6d..9bfa9f545b9a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11159,7 +11159,9 @@ static unsigned int get_rr_interval_fair(struct rq *rq, struct task_struct *task
* All the scheduling class methods:
*/
const struct sched_class fair_sched_class
- __attribute__((section("__fair_sched_class"))) = {
+ __attribute__((section("__fair_sched_class")))
+ __attribute__((aligned(__alignof__(struct sched_class)))) = {
+
.enqueue_task = enqueue_task_fair,
.dequeue_task = dequeue_task_fair,
.yield_task = yield_task_fair,
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index f324dc36fc43..c74ded2cabd2 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -458,7 +458,9 @@ static void update_curr_idle(struct rq *rq)
* Simple, special scheduling class for the per-CPU idle tasks:
*/
const struct sched_class idle_sched_class
- __attribute__((section("__idle_sched_class"))) = {
+ __attribute__((section("__idle_sched_class")))
+ __attribute__((aligned(__alignof__(struct sched_class)))) = {
+
/* no enqueue/yield_task for idle tasks */

/* dequeue is not valid, we print a debug message there: */
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index f215eea6a966..002cdbfa5308 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -2430,7 +2430,9 @@ static unsigned int get_rr_interval_rt(struct rq *rq, struct task_struct *task)
}

const struct sched_class rt_sched_class
- __attribute__((section("__rt_sched_class"))) = {
+ __attribute__((section("__rt_sched_class")))
+ __attribute__((aligned(__alignof__(struct sched_class)))) = {
+
.enqueue_task = enqueue_task_rt,
.dequeue_task = dequeue_task_rt,
.yield_task = yield_task_rt,
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 28709f6b0975..42cf1e0cbf16 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -67,7 +67,6 @@
#include <linux/tsacct_kern.h>

#include <asm/tlb.h>
-#include <asm-generic/vmlinux.lds.h>

#ifdef CONFIG_PARAVIRT
# include <asm/paravirt.h>
@@ -1826,7 +1825,7 @@ struct sched_class {
#ifdef CONFIG_FAIR_GROUP_SCHED
void (*task_change_group)(struct task_struct *p, int type);
#endif
-} __aligned(STRUCT_ALIGNMENT); /* STRUCT_ALIGN(), vmlinux.lds.h */
+};

static inline void put_prev_task(struct rq *rq, struct task_struct *prev)
{
diff --git a/kernel/sched/stop_task.c b/kernel/sched/stop_task.c
index 394bc8126a1e..7bac6e0a9212 100644
--- a/kernel/sched/stop_task.c
+++ b/kernel/sched/stop_task.c
@@ -110,7 +110,8 @@ static void update_curr_stop(struct rq *rq)
* Simple, special scheduling class for the per-CPU stop tasks:
*/
const struct sched_class stop_sched_class
- __attribute__((section("__stop_sched_class"))) = {
+ __attribute__((section("__stop_sched_class")))
+ __attribute__((aligned(__alignof__(struct sched_class)))) = {

.enqueue_task = enqueue_task_stop,
.dequeue_task = dequeue_task_stop,

2020-10-22 06:31:05

by Miguel Ojeda

Subject: Re: GCC section alignment, and GCC-4.9 being a weird one

On Wed, Oct 21, 2020 at 7:42 PM Nick Desaulniers
<[email protected]> wrote:
>
> If you used some of the macros from
> include/linux/compiler_attributes.h like __section and __aligned, I
> would prefer it. Please consider spelling out __attribute__(()) an
> antipattern.

+1, the shorthands should be used unless there is a reason not to (and
please write the reason in a comment in that case).
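
With those shorthands, the declarations in the patch would read roughly as
in this sketch (illustration only; note that __section() as defined in
compiler_attributes.h at the time of this thread stringifies its argument,
so the section name is passed unquoted):

const struct sched_class dl_sched_class
	__section(__dl_sched_class)
	__aligned(__alignof__(struct sched_class)) = {
	.enqueue_task	= enqueue_task_dl,
	/* ... */
};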

Cheers,
Miguel

2020-10-22 06:34:29

by Joe Perches

Subject: Re: GCC section alignment, and GCC-4.9 being a weird one

(adding cc's of kernel-mentees and a few checkpatch contributors)

On Wed, 2020-10-21 at 19:54 +0200, Miguel Ojeda wrote:
> On Wed, Oct 21, 2020 at 7:42 PM Nick Desaulniers
> <[email protected]> wrote:
> > If you used some of the macros from
> > include/linux/compiler_attributes.h like __section and __aligned, I
> > would prefer it. Please consider spelling out __attribute__(()) an
> > antipattern.
>
> +1, the shorthands should be used unless there is a reason not to (and
> please write the reason in a comment in that case).

Perhaps something to add as a generic test in checkpatch.

Right now it checks separately for each __attribute__
use with any of aligned, section, printf and scanf.

Likely it should include more of the #defines from
include/linux/compiler_attributes.h.

include/linux/compiler_attributes.h:#define __alias(symbol) __attribute__((__alias__(#symbol)))
include/linux/compiler_attributes.h:#define __aligned(x) __attribute__((__aligned__(x)))
include/linux/compiler_attributes.h:#define __aligned_largest __attribute__((__aligned__))
include/linux/compiler_attributes.h:#define __always_inline inline __attribute__((__always_inline__))
include/linux/compiler_attributes.h:# define __assume_aligned(a, ...) __attribute__((__assume_aligned__(a, ## __VA_ARGS__)))
include/linux/compiler_attributes.h:#define __cold __attribute__((__cold__))
include/linux/compiler_attributes.h:#define __attribute_const__ __attribute__((__const__))
include/linux/compiler_attributes.h:# define __copy(symbol) __attribute__((__copy__(symbol)))
include/linux/compiler_attributes.h:# define __designated_init __attribute__((__designated_init__))
include/linux/compiler_attributes.h:# define __visible __attribute__((__externally_visible__))
include/linux/compiler_attributes.h:#define __printf(a, b) __attribute__((__format__(printf, a, b)))
include/linux/compiler_attributes.h:#define __scanf(a, b) __attribute__((__format__(scanf, a, b)))
include/linux/compiler_attributes.h:#define __gnu_inline __attribute__((__gnu_inline__))
include/linux/compiler_attributes.h:#define __malloc __attribute__((__malloc__))
include/linux/compiler_attributes.h:#define __mode(x) __attribute__((__mode__(x)))
include/linux/compiler_attributes.h:# define __no_caller_saved_registers __attribute__((__no_caller_saved_registers__))
include/linux/compiler_attributes.h:# define __noclone __attribute__((__noclone__))
include/linux/compiler_attributes.h:# define fallthrough __attribute__((__fallthrough__))
include/linux/compiler_attributes.h:#define noinline __attribute__((__noinline__))
include/linux/compiler_attributes.h:# define __nonstring __attribute__((__nonstring__))
include/linux/compiler_attributes.h:#define __noreturn __attribute__((__noreturn__))
include/linux/compiler_attributes.h:#define __packed __attribute__((__packed__))
include/linux/compiler_attributes.h:#define __pure __attribute__((__pure__))
include/linux/compiler_attributes.h:#define __section(S) __attribute__((__section__(#S)))
include/linux/compiler_attributes.h:#define __always_unused __attribute__((__unused__))
include/linux/compiler_attributes.h:#define __maybe_unused __attribute__((__unused__))
include/linux/compiler_attributes.h:#define __used __attribute__((__used__))
include/linux/compiler_attributes.h:#define __weak __attribute__((__weak__))


2020-10-22 06:42:25

by Miguel Ojeda

Subject: Re: GCC section alignment, and GCC-4.9 being a weird one

On Wed, Oct 21, 2020 at 8:35 PM Joe Perches <[email protected]> wrote:
>
> Perhaps something to add as a generic test in checkpatch.

Agreed! It would be nice to check all of them. Even checking for
`attribute` and `__attribute__` could be potentially reasonable (i.e.
so that people are reminded to consider adding whatever attributes
they need into `compiler_attributes.h`), although perhaps too
annoying/noisy for some (e.g. for `arch/*` devs...).

Cheers,
Miguel

2020-10-22 08:53:30

by Peter Zijlstra

Subject: Re: GCC section alignment, and GCC-4.9 being a weird one

On Wed, Oct 21, 2020 at 10:42:01AM -0700, Nick Desaulniers wrote:
> On Wed, Oct 21, 2020 at 6:45 AM Peter Zijlstra <[email protected]> wrote:
> >
> > Ah, thanks!
> >
> > In that case something like the below ought to make it good.
> >
> > I'll go feed it to the robots, see if anything falls over.
> >
> > ---
> > kernel/sched/deadline.c | 4 +++-
> > kernel/sched/fair.c | 4 +++-
> > kernel/sched/idle.c | 4 +++-
> > kernel/sched/rt.c | 4 +++-
> > kernel/sched/sched.h | 3 +--
> > kernel/sched/stop_task.c | 3 ++-
> > 6 files changed, 15 insertions(+), 7 deletions(-)
> >
> > diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> > index 6d93f4518734..f4855203143d 100644
> > --- a/kernel/sched/deadline.c
> > +++ b/kernel/sched/deadline.c
> > @@ -2504,7 +2504,9 @@ static void prio_changed_dl(struct rq *rq, struct task_struct *p,
> > }
> >
> > const struct sched_class dl_sched_class
> > - __attribute__((section("__dl_sched_class"))) = {
> > + __attribute__((section("__dl_sched_class")))
> > + __attribute__((aligned(__alignof__(struct sched_class)))) = {
>
> If you used some of the macros from
> include/linux/compiler_attributes.h like __section and __aligned, I
> would prefer it. Please consider spelling out __attribute__(()) an
> antipattern.

Feh, and then suffer more patches because someone doesn't like how
__section uses # :/
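
For reference, the '#' in question is the stringification in the definition
quoted earlier in the thread, which means the section name has to be passed
unquoted:

	/* #define __section(S) __attribute__((__section__(#S))) */
	__section(__dl_sched_class)	/* i.e. __attribute__((__section__("__dl_sched_class"))) */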

2020-10-22 12:02:57

by Nick Desaulniers

Subject: Re: GCC section alignment, and GCC-4.9 being a weird one

On Wed, Oct 21, 2020 at 6:45 AM Peter Zijlstra <[email protected]> wrote:
>
> Ah, thanks!
>
> In that case something like the below ought to make it good.
>
> I'll go feed it to the robots, see if anything falls over.
>
> ---
> kernel/sched/deadline.c | 4 +++-
> kernel/sched/fair.c | 4 +++-
> kernel/sched/idle.c | 4 +++-
> kernel/sched/rt.c | 4 +++-
> kernel/sched/sched.h | 3 +--
> kernel/sched/stop_task.c | 3 ++-
> 6 files changed, 15 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index 6d93f4518734..f4855203143d 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -2504,7 +2504,9 @@ static void prio_changed_dl(struct rq *rq, struct task_struct *p,
> }
>
> const struct sched_class dl_sched_class
> - __attribute__((section("__dl_sched_class"))) = {
> + __attribute__((section("__dl_sched_class")))
> + __attribute__((aligned(__alignof__(struct sched_class)))) = {

If you used some of the macros from
include/linux/compiler_attributes.h like __section and __aligned, I
would prefer it. Please consider spelling out __attribute__(()) an
antipattern.

> +
> .enqueue_task = enqueue_task_dl,
> .dequeue_task = dequeue_task_dl,
> .yield_task = yield_task_dl,
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index aa4c6227cd6d..9bfa9f545b9a 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -11159,7 +11159,9 @@ static unsigned int get_rr_interval_fair(struct rq *rq, struct task_struct *task
> * All the scheduling class methods:
> */
> const struct sched_class fair_sched_class
> - __attribute__((section("__fair_sched_class"))) = {
> + __attribute__((section("__fair_sched_class")))
> + __attribute__((aligned(__alignof__(struct sched_class)))) = {
> +
> .enqueue_task = enqueue_task_fair,
> .dequeue_task = dequeue_task_fair,
> .yield_task = yield_task_fair,
> diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
> index f324dc36fc43..c74ded2cabd2 100644
> --- a/kernel/sched/idle.c
> +++ b/kernel/sched/idle.c
> @@ -458,7 +458,9 @@ static void update_curr_idle(struct rq *rq)
> * Simple, special scheduling class for the per-CPU idle tasks:
> */
> const struct sched_class idle_sched_class
> - __attribute__((section("__idle_sched_class"))) = {
> + __attribute__((section("__idle_sched_class")))
> + __attribute__((aligned(__alignof__(struct sched_class)))) = {
> +
> /* no enqueue/yield_task for idle tasks */
>
> /* dequeue is not valid, we print a debug message there: */
> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> index f215eea6a966..002cdbfa5308 100644
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -2430,7 +2430,9 @@ static unsigned int get_rr_interval_rt(struct rq *rq, struct task_struct *task)
> }
>
> const struct sched_class rt_sched_class
> - __attribute__((section("__rt_sched_class"))) = {
> + __attribute__((section("__rt_sched_class")))
> + __attribute__((aligned(__alignof__(struct sched_class)))) = {
> +
> .enqueue_task = enqueue_task_rt,
> .dequeue_task = dequeue_task_rt,
> .yield_task = yield_task_rt,
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 28709f6b0975..42cf1e0cbf16 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -67,7 +67,6 @@
> #include <linux/tsacct_kern.h>
>
> #include <asm/tlb.h>
> -#include <asm-generic/vmlinux.lds.h>
>
> #ifdef CONFIG_PARAVIRT
> # include <asm/paravirt.h>
> @@ -1826,7 +1825,7 @@ struct sched_class {
> #ifdef CONFIG_FAIR_GROUP_SCHED
> void (*task_change_group)(struct task_struct *p, int type);
> #endif
> -} __aligned(STRUCT_ALIGNMENT); /* STRUCT_ALIGN(), vmlinux.lds.h */
> +};
>
> static inline void put_prev_task(struct rq *rq, struct task_struct *prev)
> {
> diff --git a/kernel/sched/stop_task.c b/kernel/sched/stop_task.c
> index 394bc8126a1e..7bac6e0a9212 100644
> --- a/kernel/sched/stop_task.c
> +++ b/kernel/sched/stop_task.c
> @@ -110,7 +110,8 @@ static void update_curr_stop(struct rq *rq)
> * Simple, special scheduling class for the per-CPU stop tasks:
> */
> const struct sched_class stop_sched_class
> - __attribute__((section("__stop_sched_class"))) = {
> + __attribute__((section("__stop_sched_class")))
> + __attribute__((aligned(__alignof__(struct sched_class)))) = {
>
> .enqueue_task = enqueue_task_stop,
> .dequeue_task = dequeue_task_stop,



--
Thanks,
~Nick Desaulniers