2023-04-16 17:26:30

by David Keisar Schmidt

Subject: [PATCH v6 3/3] arch/x86/mm/kaslr: use siphash instead of prandom_bytes_state

From: David Keisar Schmidt <[email protected]>

The memory randomization of the virtual address space
of kernel memory regions (physical memory mapping, vmalloc & vmemmap) in
arch/x86/mm/kaslr.c is based on prandom_bytes_state, which uses
the prandom_u32 PRNG.

However, the seeding here is done by calling prandom_seed_state,
which effectively uses only 32 bits of the seed. This means that observing ONE
region's offset (say 30 bits) leaves the attacker with 2 candidate seeds,
from which the offsets of the remaining two regions can be calculated.
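To see the scale of the problem, here is a hedged Python sketch of the attack shape: a toy LCG stands in for the kernel's actual prandom generator, and the seed width is shrunk to 16 bits so the full-space search runs instantly. Everything here (the generator, the seed width, the names) is illustrative, not the kernel's code:

```python
# Toy illustration: a PRNG seeded with too few bits can be brute-forced
# from a single observed output. This models the shape of the attack,
# not the kernel's actual prandom_u32 generator.

def toy_prng(seed: int) -> int:
    """A deliberately simple LCG mod a prime, standing in for the real PRNG."""
    return (seed * 75 + 74) % 65537  # purely illustrative constants

SEED_BITS = 16  # the real issue: prandom_seed_state kept only 32 bits

def recover_seeds(observed: int) -> list[int]:
    """Enumerate the entire seed space, keeping seeds that match the leak."""
    return [s for s in range(1 << SEED_BITS) if toy_prng(s) == observed]

secret_seed = 0xBEEF
leaked = toy_prng(secret_seed)          # e.g. one region offset is observed
candidates = recover_seeds(leaked)      # searching 2**16 (or 2**32) is cheap
assert secret_seed in candidates
```

With a real 32-bit seed the loop body runs 2**32 times, which is still well within reach of a single machine.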

To fix this, we have replaced the two invocations of prandom_bytes_state
and prandom_seed_state with siphash, which is considered more secure.
In addition, the original code used the same pseudo-random number in every
iteration, so to add further randomization we now call siphash once per
iteration, hashing the iteration index with the derived key.
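The per-iteration scheme, one keyed hash of the region index, bounded by the available entropy and masked to a PUD boundary, can be sketched in Python. The standard library has no SipHash, so a keyed blake2b digest stands in for siphash_1u64 (the structure, not the primitive, is the point), and the PUD width is the assumed x86-64 value of 30 bits:

```python
import hashlib

# Hedged sketch of the patched loop. A keyed blake2b digest stands in
# for siphash_1u64(i, &key); key and entropy values are illustrative.

PUD_SHIFT = 30                                   # x86-64: one PUD maps 1 GiB
PUD_MASK = ~((1 << PUD_SHIFT) - 1) & (2**64 - 1)

def keyed_u64(index: int, key: bytes) -> int:
    """Stand-in for siphash_1u64: keyed 64-bit hash of the region index."""
    h = hashlib.blake2b(index.to_bytes(8, "little"), key=key, digest_size=8)
    return int.from_bytes(h.digest(), "little")

def region_offsets(key: bytes, entropies: list[int]) -> list[int]:
    """Mirror the patched loop: one keyed hash per region, reduced to the
    entropy available to that region and aligned down to a PUD boundary."""
    offsets = []
    for i, entropy in enumerate(entropies):
        rand = keyed_u64(i, key)
        offsets.append((rand % (entropy + 1)) & PUD_MASK)
    return offsets

offs = region_offsets(b"0123456789abcdef", [1 << 40, 1 << 38, 1 << 36])
assert all(o % (1 << PUD_SHIFT) == 0 for o in offs)  # all PUD-aligned
```

Because every region hashes a different index under the same key, the offsets are no longer derived from a single reused pseudo-random value.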

Signed-off-by: David Keisar Schmidt <[email protected]>
---
Changes since v5:
* removed irrelevant changes that were appended accidentally.

Changes since v4:
* replaced the calls to prandom_bytes_state and prandom_seed_state
with siphash.

Changes since v2:
* edited commit message.

arch/x86/mm/kaslr.c | 21 +++++++++++++++------
1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index 557f0fe25..fb551796c 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -25,6 +25,7 @@
#include <linux/random.h>
#include <linux/memblock.h>
#include <linux/pgtable.h>
+#include <linux/siphash.h>

#include <asm/setup.h>
#include <asm/kaslr.h>
@@ -66,9 +67,14 @@ void __init kernel_randomize_memory(void)
size_t i;
unsigned long vaddr_start, vaddr;
unsigned long rand, memory_tb;
- struct rnd_state rand_state;
unsigned long remain_entropy;
unsigned long vmemmap_size;
+ /*
+ * Build a SipHash key from KASLR entropy. XOR-ing the seed with a
+ * pi-digit constant merely differentiates the two key words.
+ */
+ u64 seed = (u64) kaslr_get_random_long("Memory");
+ siphash_key_t key = {{seed, seed ^ 0x3141592653589793UL}};

vaddr_start = pgtable_l5_enabled() ? __PAGE_OFFSET_BASE_L5 : __PAGE_OFFSET_BASE_L4;
vaddr = vaddr_start;
@@ -94,7 +100,7 @@ void __init kernel_randomize_memory(void)
*/
BUG_ON(kaslr_regions[0].base != &page_offset_base);
memory_tb = DIV_ROUND_UP(max_pfn << PAGE_SHIFT, 1UL << TB_SHIFT) +
- CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING;
+ CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING;

/* Adapt physical memory region size based on available memory */
if (memory_tb < kaslr_regions[0].size_tb)
@@ -105,7 +111,7 @@ void __init kernel_randomize_memory(void)
* boundary.
*/
vmemmap_size = (kaslr_regions[0].size_tb << (TB_SHIFT - PAGE_SHIFT)) *
- sizeof(struct page);
+ sizeof(struct page);
kaslr_regions[2].size_tb = DIV_ROUND_UP(vmemmap_size, 1UL << TB_SHIFT);

/* Calculate entropy available between regions */
@@ -113,8 +119,6 @@ void __init kernel_randomize_memory(void)
for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++)
remain_entropy -= get_padding(&kaslr_regions[i]);

- prandom_seed_state(&rand_state, kaslr_get_random_long("Memory"));
-
for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++) {
unsigned long entropy;

@@ -123,7 +127,12 @@ void __init kernel_randomize_memory(void)
* available.
*/
entropy = remain_entropy / (ARRAY_SIZE(kaslr_regions) - i);
- prandom_bytes_state(&rand_state, &rand, sizeof(rand));
+ /*
+ * Use SipHash to generate a fresh pseudo-random number on every
+ * iteration, keyed-hashing the region index.
+ *
+ */
+ rand = siphash_1u64(i, &key);
entropy = (rand % (entropy + 1)) & PUD_MASK;
vaddr += entropy;
*kaslr_regions[i].base = vaddr;
--
2.37.3


2023-04-16 17:33:51

by Jason A. Donenfeld

Subject: Re: [PATCH v6 3/3] arch/x86/mm/kaslr: use siphash instead of prandom_bytes_state

On 4/16/23, [email protected]
<[email protected]> wrote:
> From: David Keisar Schmidt <[email protected]>
>
> However, the seeding here is done by calling prandom_seed_state,
> which effectively uses only 32bits of the seed, which means that observing
> ONE
> region's offset (say 30 bits) can provide the attacker with 2 possible
> seeds
> (from which the attacker can calculate the remaining two regions)
>
> In order to fix it, we have replaced the two invocations of
> prandom_bytes_state and prandom_seed_state
> with siphash, which is considered more secure.
> Besides, the original code used the same pseudo-random number in every
> iteration,
> so to add some additional randomization
> we call siphash every iteration, hashing the iteration index with the
> described key.
>
>

Nack. Please don't add bespoke new RNG constructions willy nilly. I
just spent a while cleaning this kind of thing up.

2023-04-23 07:41:38

by David Keisar Schmidt

Subject: Re: [PATCH v6 3/3] arch/x86/mm/kaslr: use siphash instead of prandom_bytes_state

On Sun, Apr 16, 2023 at 8:26 PM Jason A. Donenfeld <[email protected]> wrote:
>
> On 4/16/23, [email protected]
> <[email protected]> wrote:
> > From: David Keisar Schmidt <[email protected]>
> >
> > However, the seeding here is done by calling prandom_seed_state,
> > which effectively uses only 32bits of the seed, which means that observing
> > ONE
> > region's offset (say 30 bits) can provide the attacker with 2 possible
> > seeds
> > (from which the attacker can calculate the remaining two regions)
> >
> > In order to fix it, we have replaced the two invocations of
> > prandom_bytes_state and prandom_seed_state
> > with siphash, which is considered more secure.
> > Besides, the original code used the same pseudo-random number in every
> > iteration,
> > so to add some additional randomization
> > we call siphash every iteration, hashing the iteration index with the
> > described key.
> >
> >
>
> Nack. Please don't add bespoke new RNG constructions willy nilly. I
> just spent a while cleaning this kind of thing up.

Hi Jason,

Thank you for reviewing our revised patch. We appreciate your concern
regarding the use of custom RNG constructions, and we understand the
potential issues that could arise from doing so.

However, we wanted to clarify that our intention was to use a
deterministic PRNG that meets Kees Cook's requirements for debugging
and performance analysis purposes.

We also acknowledge that using a custom RNG could introduce additional
risks, and we're open to exploring alternative solutions that meet our
requirements.

If you have any suggestions for a more secure and deterministic RNG,
we'd be happy to hear them and implement them.