2024-03-13 19:00:25

by Valentin Schneider

Subject: [PATCH v3 0/4] jump_label: Fix __ro_after_init keys for modules & annotate some keys

Hi folks,

This series fixes __ro_after_init keys used in modules (courtesy of PeterZ) and
flags more keys as __ro_after_init.

Compile & boot tested for x86_64_defconfig and i386_defconfig.

@Peter, regarding making __use_tsc x86_32-only, I hit a few snags:

Currently, for the static key to be enabled, we (mostly) need:
o X86_FEATURE_TSC is in CPUID
o determine_cpu_tsc_frequencies() passes

All X86_64 systems have a TSC, so the CPUID feature is a given there.

Calibrating the TSC can end up depending on different things:
o CPUID accepting 0x16 as eax input (cf. cpu_khz_from_cpuid())
o MSR_FSB_FREQ being available (cf. cpu_khz_from_msr())
o pit_hpet_ptimer_calibrate_cpu() succeeding

I couldn't find any guarantees for X86_64 on having the processor frequency
information CPUID leaf, nor for the FSB_FREQ MSR (both tsc_msr_cpu_ids and
the SDM seem to point at only a handful of models).

pit_hpet_ptimer_calibrate_cpu() relies on having either HPET or the ACPI PM
timer, the latter being widely available, though X86_PM_TIMER can be
disabled via EXPERT.

The question here is: are there any guarantees that at least one of these
can be relied upon for x86_64?
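
To condense the above into code: a rough sketch of the fallback order as I
read it, not the exact upstream flow (the real logic is spread across
native_calibrate_cpu_early() and native_calibrate_cpu() in
arch/x86/kernel/tsc.c, and the sketch's function name is made up):

  /* Sketch only: each step can legitimately fail, i.e. return 0. */
  static unsigned long calibrate_cpu_khz_sketch(void)
  {
          unsigned long khz;

          khz = cpu_khz_from_cpuid();     /* CPUID 0x16: no x86_64 guarantee */
          if (khz)
                  return khz;

          khz = cpu_khz_from_msr();       /* MSR_FSB_FREQ: few models only */
          if (khz)
                  return khz;

          /* Needs HPET or ACPI PM timer; X86_PM_TIMER can be off via EXPERT */
          return pit_hpet_ptimer_calibrate_cpu();
  }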

And with all of that, there is still the "apicpmtimer" cmdline option which
currently invokes notsc_setup() on x86_64. The justification I found for it was
in 0c3749c41f5e ("[PATCH] x86_64: Calibrate APIC timer using PM timer"):

"""
On some broken motherboards (at least one NForce3 based AMD64 laptop)
the PIT timer runs at a incorrect frequency. This patch adds a new
option "apicpmtimer" that allows to use the APIC timer and calibrate it
using the PMTimer.
"""


Revisions
=========

v2 -> v3
++++++++

o Rebased against latest upstream
o Fixed up the jump_label_del_module() error in the first patch, reported by
the kernel test robot:
http://lore.kernel.org/r/[email protected]
o Removed mds_user_clear patch; key was switched to an ALTERNATIVE by
6613d82e617d ("x86/bugs: Use ALTERNATIVE() instead of mds_user_clear static key")


v1 -> v2
++++++++

o Collected tags (Josh, Sean)
o Fixed CONFIG_JUMP_LABEL=n compile fail (lkp)

Cheers,
Valentin

Peter Zijlstra (1):
jump_label,module: Don't alloc static_key_mod for __ro_after_init keys

Valentin Schneider (3):
context_tracking: Make context_tracking_key __ro_after_init
x86/kvm: Make kvm_async_pf_enabled __ro_after_init
x86/tsc: Make __use_tsc __ro_after_init

arch/x86/kernel/kvm.c | 2 +-
arch/x86/kernel/tsc.c | 2 +-
include/asm-generic/sections.h | 5 ++++
include/linux/jump_label.h | 3 ++
init/main.c | 1 +
kernel/context_tracking.c | 2 +-
kernel/jump_label.c | 53 ++++++++++++++++++++++++++++++++++
7 files changed, 65 insertions(+), 3 deletions(-)

--
2.43.0



2024-03-13 19:01:02

by Valentin Schneider

Subject: [PATCH v3 1/4] jump_label,module: Don't alloc static_key_mod for __ro_after_init keys

From: Peter Zijlstra <[email protected]>

When a static_key is marked ro_after_init, its state will never change
(after init); therefore jump_label_update() will never need to iterate
the entries, and thus module load won't actually need to track this --
avoiding the static_key::next write.

Therefore, mark these keys so that jump_label_add_module() can recognise
them and avoid the modification.

Use the special state: 'static_key_linked(key) && !static_key_mod(key)'
to denote such keys.
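
To unpack that a little, here is the existing static_key::type encoding this
builds on, summarised as a comment (per include/linux/jump_label.h, no new
behaviour implied):

  /*
   * static_key::type overlays the ->entries / ->next pointer with two
   * low flag bits:
   *   bit 0: JUMP_TYPE_TRUE   - the key's current value
   *   bit 1: JUMP_TYPE_LINKED - upper bits point to a static_key_mod
   *                             chain rather than a jump_entry array
   * A sealed key therefore has JUMP_TYPE_LINKED set with zero pointer
   * bits, a combination no ordinary key can reach.
   */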

jump_label_add_module() does not exist under CONFIG_JUMP_LABEL=n, so the
newly-introduced jump_label_ro() can be defined as a nop for that
configuration.

Link: http://lore.kernel.org/r/[email protected]
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
[Added comments and build fix]
Signed-off-by: Valentin Schneider <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
---
include/asm-generic/sections.h | 5 ++++
include/linux/jump_label.h | 3 ++
init/main.c | 1 +
kernel/jump_label.c | 53 ++++++++++++++++++++++++++++++++++
4 files changed, 62 insertions(+)

diff --git a/include/asm-generic/sections.h b/include/asm-generic/sections.h
index db13bb620f527..c768de6f19a9a 100644
--- a/include/asm-generic/sections.h
+++ b/include/asm-generic/sections.h
@@ -180,6 +180,11 @@ static inline bool is_kernel_rodata(unsigned long addr)
 	       addr < (unsigned long)__end_rodata;
 }
 
+static inline bool is_kernel_ro_after_init(unsigned long addr)
+{
+	return addr >= (unsigned long)__start_ro_after_init &&
+	       addr < (unsigned long)__end_ro_after_init;
+}
 /**
  * is_kernel_inittext - checks if the pointer address is located in the
  * .init.text section
diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h
index f0a949b7c9733..3b103d88c139e 100644
--- a/include/linux/jump_label.h
+++ b/include/linux/jump_label.h
@@ -216,6 +216,7 @@ extern struct jump_entry __start___jump_table[];
 extern struct jump_entry __stop___jump_table[];
 
 extern void jump_label_init(void);
+extern void jump_label_ro(void);
 extern void jump_label_lock(void);
 extern void jump_label_unlock(void);
 extern void arch_jump_label_transform(struct jump_entry *entry,
@@ -265,6 +266,8 @@ static __always_inline void jump_label_init(void)
 	static_key_initialized = true;
 }
 
+static __always_inline void jump_label_ro(void) { }
+
 static __always_inline bool static_key_false(struct static_key *key)
 {
 	if (unlikely_notrace(static_key_count(key) > 0))
diff --git a/init/main.c b/init/main.c
index 7dce08198b133..6fc421b4d5fdb 100644
--- a/init/main.c
+++ b/init/main.c
@@ -1412,6 +1412,7 @@ static void mark_readonly(void)
 		 * insecure pages which are W+X.
 		 */
 		rcu_barrier();
+		jump_label_ro();
 		mark_rodata_ro();
 		rodata_test();
 	} else
diff --git a/kernel/jump_label.c b/kernel/jump_label.c
index d9c822bbffb8d..7e3e8d1a0fea7 100644
--- a/kernel/jump_label.c
+++ b/kernel/jump_label.c
@@ -530,6 +530,45 @@ void __init jump_label_init(void)
 	cpus_read_unlock();
 }
 
+static inline bool static_key_sealed(struct static_key *key)
+{
+	return (key->type & JUMP_TYPE_LINKED) && !(key->type & ~JUMP_TYPE_MASK);
+}
+
+static inline void static_key_seal(struct static_key *key)
+{
+	unsigned long type = key->type & JUMP_TYPE_TRUE;
+	key->type = JUMP_TYPE_LINKED | type;
+}
+
+void jump_label_ro(void)
+{
+	struct jump_entry *iter_start = __start___jump_table;
+	struct jump_entry *iter_stop = __stop___jump_table;
+	struct jump_entry *iter;
+
+	if (WARN_ON_ONCE(!static_key_initialized))
+		return;
+
+	cpus_read_lock();
+	jump_label_lock();
+
+	for (iter = iter_start; iter < iter_stop; iter++) {
+		struct static_key *iterk = jump_entry_key(iter);
+
+		if (!is_kernel_ro_after_init((unsigned long)iterk))
+			continue;
+
+		if (static_key_sealed(iterk))
+			continue;
+
+		static_key_seal(iterk);
+	}
+
+	jump_label_unlock();
+	cpus_read_unlock();
+}
+
 #ifdef CONFIG_MODULES
 
 enum jump_label_type jump_label_init_type(struct jump_entry *entry)
@@ -650,6 +689,15 @@ static int jump_label_add_module(struct module *mod)
 			static_key_set_entries(key, iter);
 			continue;
 		}
+
+		/*
+		 * If the key was sealed at init, then there's no need to keep a
+		 * reference to its module entries - just patch them now and be
+		 * done with it.
+		 */
+		if (static_key_sealed(key))
+			goto do_poke;
+
 		jlm = kzalloc(sizeof(struct static_key_mod), GFP_KERNEL);
 		if (!jlm)
 			return -ENOMEM;
@@ -675,6 +723,7 @@ static int jump_label_add_module(struct module *mod)
 		static_key_set_linked(key);
 
 		/* Only update if we've changed from our initial state */
+do_poke:
 		if (jump_label_type(iter) != jump_label_init_type(iter))
 			__jump_label_update(key, iter, iter_stop, true);
 	}
@@ -699,6 +748,10 @@ static void jump_label_del_module(struct module *mod)
 		if (within_module((unsigned long)key, mod))
 			continue;
 
+		/* No @jlm allocated because key was sealed at init. */
+		if (static_key_sealed(key))
+			continue;
+
 		/* No memory during module load */
 		if (WARN_ON(!static_key_linked(key)))
 			continue;
--
2.43.0


2024-03-13 19:02:26

by Valentin Schneider

Subject: [PATCH v3 3/4] x86/kvm: Make kvm_async_pf_enabled __ro_after_init

kvm_async_pf_enabled is only ever enabled in __init kvm_guest_init(), so
mark it as __ro_after_init.
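
For reference, the _RO variant only differs from the plain one by placing the
key in the ro_after_init section (per include/linux/jump_label.h):

  #define DEFINE_STATIC_KEY_FALSE_RO(name)	\
          struct static_key_false name __ro_after_init = STATIC_KEY_FALSE_INIT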

Reviewed-by: Sean Christopherson <[email protected]>
Signed-off-by: Valentin Schneider <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
---
arch/x86/kernel/kvm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 101a7c1bf2008..6c6ff015b99fd 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -44,7 +44,7 @@
 #include <asm/svm.h>
 #include <asm/e820/api.h>
 
-DEFINE_STATIC_KEY_FALSE(kvm_async_pf_enabled);
+DEFINE_STATIC_KEY_FALSE_RO(kvm_async_pf_enabled);
 
 static int kvmapf = 1;

--
2.43.0


2024-03-13 19:08:23

by Valentin Schneider

Subject: [PATCH v3 4/4] x86/tsc: Make __use_tsc __ro_after_init

__use_tsc is only ever enabled in __init tsc_enable_sched_clock(), so mark
it as __ro_after_init.
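
The write-once lifecycle that makes this safe, sketched below; the enable
call is the real one in tsc_enable_sched_clock(), while the reader is a
made-up stand-in for native_sched_clock():

  static DEFINE_STATIC_KEY_FALSE_RO(__use_tsc);

  static void __init tsc_enable_sched_clock(void)
  {
          static_branch_enable(&__use_tsc);       /* only write, during init */
  }

  u64 sched_clock_sketch(void)
  {
          if (static_branch_likely(&__use_tsc))   /* patched jump, no load */
                  return rdtsc();
          return 0;                               /* non-TSC fallback */
  }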

Signed-off-by: Valentin Schneider <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
---
arch/x86/kernel/tsc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index 5a69a49acc963..0f7624ed1d1d0 100644
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -44,7 +44,7 @@ EXPORT_SYMBOL(tsc_khz);
 static int __read_mostly tsc_unstable;
 static unsigned int __initdata tsc_early_khz;
 
-static DEFINE_STATIC_KEY_FALSE(__use_tsc);
+static DEFINE_STATIC_KEY_FALSE_RO(__use_tsc);
 
 int tsc_clocksource_reliable;

--
2.43.0


2024-03-13 19:12:30

by Valentin Schneider

Subject: [PATCH v3 2/4] context_tracking: Make context_tracking_key __ro_after_init

context_tracking_key is only ever enabled in __init ct_cpu_tracker_user(),
so mark it as __ro_after_init.

Signed-off-by: Valentin Schneider <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
---
kernel/context_tracking.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
index 70ae70d038233..24b1e11432608 100644
--- a/kernel/context_tracking.c
+++ b/kernel/context_tracking.c
@@ -432,7 +432,7 @@ static __always_inline void ct_kernel_enter(bool user, int offset) { }
 #define CREATE_TRACE_POINTS
 #include <trace/events/context_tracking.h>
 
-DEFINE_STATIC_KEY_FALSE(context_tracking_key);
+DEFINE_STATIC_KEY_FALSE_RO(context_tracking_key);
 EXPORT_SYMBOL_GPL(context_tracking_key);
 
 static noinstr bool context_tracking_recursion_enter(void)
--
2.43.0


2024-03-22 10:58:33

by tip-bot2 for Valentin Schneider

Subject: [tip: locking/core] x86/tsc: Make __use_tsc __ro_after_init

The following commit has been merged into the locking/core branch of tip:

Commit-ID: 79a4567b2e8ae4d0282602a24f76f5e2382f5b98
Gitweb: https://git.kernel.org/tip/79a4567b2e8ae4d0282602a24f76f5e2382f5b98
Author: Valentin Schneider <[email protected]>
AuthorDate: Wed, 13 Mar 2024 19:01:06 +01:00
Committer: Ingo Molnar <[email protected]>
CommitterDate: Fri, 22 Mar 2024 11:18:20 +01:00

x86/tsc: Make __use_tsc __ro_after_init

__use_tsc is only ever enabled in __init tsc_enable_sched_clock(), so mark
it as __ro_after_init.

Signed-off-by: Valentin Schneider <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/x86/kernel/tsc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index 5a69a49..0f7624e 100644
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -44,7 +44,7 @@ EXPORT_SYMBOL(tsc_khz);
 static int __read_mostly tsc_unstable;
 static unsigned int __initdata tsc_early_khz;
 
-static DEFINE_STATIC_KEY_FALSE(__use_tsc);
+static DEFINE_STATIC_KEY_FALSE_RO(__use_tsc);
 
 int tsc_clocksource_reliable;


2024-03-22 10:58:33

by tip-bot2 for Valentin Schneider

Subject: [tip: locking/core] x86/kvm: Make kvm_async_pf_enabled __ro_after_init

The following commit has been merged into the locking/core branch of tip:

Commit-ID: ddd8afacc4f65a01204d7a36b8fd96c908e9b72c
Gitweb: https://git.kernel.org/tip/ddd8afacc4f65a01204d7a36b8fd96c908e9b72c
Author: Valentin Schneider <[email protected]>
AuthorDate: Wed, 13 Mar 2024 19:01:05 +01:00
Committer: Ingo Molnar <[email protected]>
CommitterDate: Fri, 22 Mar 2024 11:18:19 +01:00

x86/kvm: Make kvm_async_pf_enabled __ro_after_init

kvm_async_pf_enabled is only ever enabled in __init kvm_guest_init(), so
mark it as __ro_after_init.

Signed-off-by: Valentin Schneider <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Sean Christopherson <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/x86/kernel/kvm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 4cadfd6..31a48ba 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -44,7 +44,7 @@
 #include <asm/svm.h>
 #include <asm/e820/api.h>
 
-DEFINE_STATIC_KEY_FALSE(kvm_async_pf_enabled);
+DEFINE_STATIC_KEY_FALSE_RO(kvm_async_pf_enabled);
 
 static int kvmapf = 1;


2024-03-22 10:59:06

by tip-bot2 for Peter Zijlstra

Subject: [tip: locking/core] jump_label,module: Don't alloc static_key_mod for __ro_after_init keys

The following commit has been merged into the locking/core branch of tip:

Commit-ID: 91a1d97ef482c1e4c9d4c1c656a53b0f6b16d0ed
Gitweb: https://git.kernel.org/tip/91a1d97ef482c1e4c9d4c1c656a53b0f6b16d0ed
Author: Peter Zijlstra <[email protected]>
AuthorDate: Wed, 13 Mar 2024 19:01:03 +01:00
Committer: Ingo Molnar <[email protected]>
CommitterDate: Fri, 22 Mar 2024 11:18:16 +01:00

jump_label,module: Don't alloc static_key_mod for __ro_after_init keys

When a static_key is marked ro_after_init, its state will never change
(after init); therefore jump_label_update() will never need to iterate
the entries, and thus module load won't actually need to track this --
avoiding the static_key::next write.

Therefore, mark these keys so that jump_label_add_module() can recognise
them and avoid the modification.

Use the special state: 'static_key_linked(key) && !static_key_mod(key)'
to denote such keys.

jump_label_add_module() does not exist under CONFIG_JUMP_LABEL=n, so the
newly-introduced jump_label_init_ro() can be defined as a nop for that
configuration.

[ mingo: Renamed jump_label_ro() to jump_label_init_ro() ]

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Valentin Schneider <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
include/asm-generic/sections.h | 5 +++-
include/linux/jump_label.h | 3 ++-
init/main.c | 1 +-
kernel/jump_label.c | 53 +++++++++++++++++++++++++++++++++-
4 files changed, 62 insertions(+)

diff --git a/include/asm-generic/sections.h b/include/asm-generic/sections.h
index db13bb6..c768de6 100644
--- a/include/asm-generic/sections.h
+++ b/include/asm-generic/sections.h
@@ -180,6 +180,11 @@ static inline bool is_kernel_rodata(unsigned long addr)
 	       addr < (unsigned long)__end_rodata;
 }
 
+static inline bool is_kernel_ro_after_init(unsigned long addr)
+{
+	return addr >= (unsigned long)__start_ro_after_init &&
+	       addr < (unsigned long)__end_ro_after_init;
+}
 /**
  * is_kernel_inittext - checks if the pointer address is located in the
  * .init.text section
diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h
index f0a949b..f5a2727 100644
--- a/include/linux/jump_label.h
+++ b/include/linux/jump_label.h
@@ -216,6 +216,7 @@ extern struct jump_entry __start___jump_table[];
 extern struct jump_entry __stop___jump_table[];
 
 extern void jump_label_init(void);
+extern void jump_label_init_ro(void);
 extern void jump_label_lock(void);
 extern void jump_label_unlock(void);
 extern void arch_jump_label_transform(struct jump_entry *entry,
@@ -265,6 +266,8 @@ static __always_inline void jump_label_init(void)
 	static_key_initialized = true;
 }
 
+static __always_inline void jump_label_init_ro(void) { }
+
 static __always_inline bool static_key_false(struct static_key *key)
 {
 	if (unlikely_notrace(static_key_count(key) > 0))
diff --git a/init/main.c b/init/main.c
index 2ca5247..6c3f251 100644
--- a/init/main.c
+++ b/init/main.c
@@ -1408,6 +1408,7 @@ static void mark_readonly(void)
 		 * insecure pages which are W+X.
 		 */
 		flush_module_init_free_work();
+		jump_label_init_ro();
 		mark_rodata_ro();
 		debug_checkwx();
 		rodata_test();
diff --git a/kernel/jump_label.c b/kernel/jump_label.c
index d9c822b..3218fa5 100644
--- a/kernel/jump_label.c
+++ b/kernel/jump_label.c
@@ -530,6 +530,45 @@ void __init jump_label_init(void)
 	cpus_read_unlock();
 }
 
+static inline bool static_key_sealed(struct static_key *key)
+{
+	return (key->type & JUMP_TYPE_LINKED) && !(key->type & ~JUMP_TYPE_MASK);
+}
+
+static inline void static_key_seal(struct static_key *key)
+{
+	unsigned long type = key->type & JUMP_TYPE_TRUE;
+	key->type = JUMP_TYPE_LINKED | type;
+}
+
+void jump_label_init_ro(void)
+{
+	struct jump_entry *iter_start = __start___jump_table;
+	struct jump_entry *iter_stop = __stop___jump_table;
+	struct jump_entry *iter;
+
+	if (WARN_ON_ONCE(!static_key_initialized))
+		return;
+
+	cpus_read_lock();
+	jump_label_lock();
+
+	for (iter = iter_start; iter < iter_stop; iter++) {
+		struct static_key *iterk = jump_entry_key(iter);
+
+		if (!is_kernel_ro_after_init((unsigned long)iterk))
+			continue;
+
+		if (static_key_sealed(iterk))
+			continue;
+
+		static_key_seal(iterk);
+	}
+
+	jump_label_unlock();
+	cpus_read_unlock();
+}
+
 #ifdef CONFIG_MODULES
 
 enum jump_label_type jump_label_init_type(struct jump_entry *entry)
@@ -650,6 +689,15 @@ static int jump_label_add_module(struct module *mod)
 			static_key_set_entries(key, iter);
 			continue;
 		}
+
+		/*
+		 * If the key was sealed at init, then there's no need to keep a
+		 * reference to its module entries - just patch them now and be
+		 * done with it.
+		 */
+		if (static_key_sealed(key))
+			goto do_poke;
+
 		jlm = kzalloc(sizeof(struct static_key_mod), GFP_KERNEL);
 		if (!jlm)
 			return -ENOMEM;
@@ -675,6 +723,7 @@ static int jump_label_add_module(struct module *mod)
 		static_key_set_linked(key);
 
 		/* Only update if we've changed from our initial state */
+do_poke:
 		if (jump_label_type(iter) != jump_label_init_type(iter))
 			__jump_label_update(key, iter, iter_stop, true);
 	}
@@ -699,6 +748,10 @@ static void jump_label_del_module(struct module *mod)
 		if (within_module((unsigned long)key, mod))
 			continue;
 
+		/* No @jlm allocated because key was sealed at init. */
+		if (static_key_sealed(key))
+			continue;
+
 		/* No memory during module load */
 		if (WARN_ON(!static_key_linked(key)))
 			continue;

2024-03-22 11:08:39

by tip-bot2 for Valentin Schneider

Subject: [tip: locking/core] context_tracking: Make context_tracking_key __ro_after_init

The following commit has been merged into the locking/core branch of tip:

Commit-ID: 4eab44a8ae98df53e59f6af3c860142177235507
Gitweb: https://git.kernel.org/tip/4eab44a8ae98df53e59f6af3c860142177235507
Author: Valentin Schneider <[email protected]>
AuthorDate: Wed, 13 Mar 2024 19:01:04 +01:00
Committer: Ingo Molnar <[email protected]>
CommitterDate: Fri, 22 Mar 2024 11:18:18 +01:00

context_tracking: Make context_tracking_key __ro_after_init

context_tracking_key is only ever enabled in __init ct_cpu_tracker_user(),
so mark it as __ro_after_init.

Signed-off-by: Valentin Schneider <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
kernel/context_tracking.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
index 70ae70d..24b1e11 100644
--- a/kernel/context_tracking.c
+++ b/kernel/context_tracking.c
@@ -432,7 +432,7 @@ static __always_inline void ct_kernel_enter(bool user, int offset) { }
 #define CREATE_TRACE_POINTS
 #include <trace/events/context_tracking.h>
 
-DEFINE_STATIC_KEY_FALSE(context_tracking_key);
+DEFINE_STATIC_KEY_FALSE_RO(context_tracking_key);
 EXPORT_SYMBOL_GPL(context_tracking_key);
 
 static noinstr bool context_tracking_recursion_enter(void)