2018-01-15 14:09:44

by Chao Fan

Subject: [PATCH v6 0/5] kaslr: add parameter kaslr_mem=nn[KMG][@|!ss[KMG]]

In the current code, kaslr may break some features because of the
wrong position it chooses for extracting the kernel. So add
kaslr_mem=nn[KMG][@|!ss[KMG]]. Users can specify the memory regions
kaslr may choose with kaslr_mem=nn[KMG]@ss[KMG], and use
kaslr_mem=nn[KMG]!ss[KMG] to specify the regions kaslr should not
choose.

First, here is the problem:
Consider a machine with several NUMA nodes, some of which are
hot-pluggable. It is not good for the kernel to be extracted in the
memory region of a movable node. But with the current code, I printed the
address chosen by kaslr and found it may sometimes be placed in a
movable node.
To solve this problem, it is better to limit the memory regions chosen by
kaslr to immovable nodes in kaslr.c. But the information about whether
memory is hot-pluggable is stored in the ACPI SRAT table, which is parsed
after the kernel is extracted. So we cannot get detailed memory
information before extracting the kernel.

So add the new parameter kaslr_mem=nn@ss, in which nn means the size of
a memory region in an *immovable* node, and ss means the start position
of this region. Then limit kaslr to choose memory only within these regions.

There are two possible policies:
1. Specify the memory regions in *movable* nodes to avoid:
Then we could use the existing mem_avoid to handle them. But if the
memory on a movable node is separated by memory holes, or different
movable nodes are discontiguous, we don't know how many regions need to
be avoided. On the other hand, we must avoid all of the movable memory,
otherwise kaslr may choose the wrong place.
2. Specify the memory regions in *immovable* nodes to select:
Only 4 regions are supported in this parameter. That lets users give
kaslr at least two nodes to choose from, which is enough for extracting
the kernel. At the same time, because only 4 new mem_vector entries are
needed, the memory usage here is small. So I think this way is better,
and this patchset is based on this policy.

Then there is another problem, about 1G huge pages:
https://lkml.org/lkml/2018/1/4/236
KASLR may choose the memory region which is the only region suitable
for a 1G huge page. So I add the new patch 5/5 to store such regions
in mem_avoid. Users can specify kaslr_mem=nn!ss to reserve the regions
needed for 1G huge pages or other features.

PATCH 1/5 parses the new parameter kaslr_mem=nn[KMG]@ss[KMG] and
stores the memory regions.
PATCH 2/5 gives a warning if movable_node is specified without kaslr_mem=.
PATCH 3/5 skips the mirror feature if movable_node is specified.
PATCH 4/5 calculates the memory regions and chooses from the regions specified.
PATCH 5/5 adds kaslr_mem=nn[KMG]!ss[KMG] to specify regions to avoid.

v1->v2:
Follow Dou Liyang's suggestion:
- Add parsing for movable_node=nn[KMG] without @ss[KMG]
- Fix the bug when more than one "movable_node=" is specified
- Drop useless variables and use mem_vector regions directly
- Add more comments.

v2->v3:
Follow Baoquan He's suggestion:
- Change the names of several functions.
- Add a new parameter "immovable_mem" instead of extending movable_node
- Use clamp() to calculate the memory intersection, which makes the
logic clearer.
- Disable memory mirror if movable_node is specified

v3->v4:
Follow Kees's suggestion:
- Put the functions and variables of immovable_mem under #ifdef
CONFIG_MEMORY_HOTPLUG and move some code around
- Change the name of "process_mem_region" to "slots_count"
- Rename the new function "process_immovable_mem" to "process_mem_region"
Follow Baoquan's suggestion:
- Fail KASLR if "movable_node" is specified without "immovable_mem"
- Adjust the code that handles mem_region directly when no
immovable_mem is specified
Follow Randy's suggestion:
- Fix the mistake and add a detailed description to the documentation.

v4->v5:
- Fix the problem reported by LKP
Follow Dou's suggestion:
- Also return if "movable_node" matches when parsing the kernel command
line in handle_mem_filter without CONFIG_MEMORY_HOTPLUG defined

v5->v6:
- Add the last patch to save the avoid memory regions.


Chao Fan (5):
kaslr: add kaslr_mem=nn[KMG]@ss[KMG] to specify extracting memory
kaslr: give a warning if movable_node specified without kaslr_mem=
kaslr: disable memory mirror feature when movable_node
kaslr: calculate the memory region in kaslr_mem
kaslr: add kaslr_mem=nn[KMG]!ss[KMG] to avoid memory regions

arch/x86/boot/compressed/kaslr.c | 198 ++++++++++++++++++++++++++++++++++++---
1 file changed, 184 insertions(+), 14 deletions(-)

--
2.14.3




2018-01-15 12:41:23

by Chao Fan

Subject: [PATCH v6 2/5] kaslr: give a warning if movable_node specified without kaslr_mem=

Signed-off-by: Chao Fan <[email protected]>
---
arch/x86/boot/compressed/kaslr.c | 10 ++++++++++
1 file changed, 10 insertions(+)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index b071f6edd7b2..38816d3f8865 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -282,6 +282,16 @@ static int handle_mem_filter(void)
!strstr(args, "kaslr_mem="))
return 0;

+#ifdef CONFIG_MEMORY_HOTPLUG
+ /*
+ * Check if "kaslr_mem=" is specified when "movable_node" is found.
+ * If not, just give a warning, since memory hotplug could be
+ * affected if the kernel is put on movable memory regions.
+ */
+ if (strstr(args, "movable_node") && !strstr(args, "kaslr_mem="))
+ warn("kaslr_mem= should be specified when using movable_node.\n");
+#endif
+
tmp_cmdline = malloc(len + 1);
if (!tmp_cmdline)
error("Failed to allocate space for tmp_cmdline");
--
2.14.3



2018-01-15 12:41:30

by Chao Fan

Subject: [PATCH v6 4/5] kaslr: calculate the memory region in kaslr_mem

If no kaslr_mem= region is specified, use each region directly.
Otherwise, calculate the intersection between every efi or e820 memmap
entry and the kaslr_mem regions.

Rename process_mem_region to slots_count to match
slots_fetch_random, and name the new function process_mem_region.

Signed-off-by: Chao Fan <[email protected]>
---
arch/x86/boot/compressed/kaslr.c | 64 +++++++++++++++++++++++++++++++++-------
1 file changed, 53 insertions(+), 11 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 5615f26364f9..fc531fa1f10c 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -558,9 +558,9 @@ static unsigned long slots_fetch_random(void)
return 0;
}

-static void process_mem_region(struct mem_vector *entry,
- unsigned long minimum,
- unsigned long image_size)
+static void slots_count(struct mem_vector *entry,
+ unsigned long minimum,
+ unsigned long image_size)
{
struct mem_vector region, overlap;
struct slot_area slot_area;
@@ -637,6 +637,52 @@ static void process_mem_region(struct mem_vector *entry,
}
}

+static bool process_mem_region(struct mem_vector region,
+ unsigned long minimum,
+ unsigned long image_size)
+{
+ /*
+ * If kaslr_mem= is specified, walk all the usable regions and
+ * pass each intersection to slots_count().
+ */
+ if (num_usable_region > 0) {
+ int i;
+
+ for (i = 0; i < num_usable_region; i++) {
+ struct mem_vector entry;
+ unsigned long long start, end, entry_end, region_end;
+
+ start = mem_usable[i].start;
+ end = start + mem_usable[i].size;
+ region_end = region.start + region.size;
+
+ entry.start = clamp(region.start, start, end);
+ entry_end = clamp(region_end, start, end);
+
+ if (entry.start < entry_end) {
+ entry.size = entry_end - entry.start;
+ slots_count(&entry, minimum, image_size);
+ }
+
+ if (slot_area_index == MAX_SLOT_AREA) {
+ debug_putstr("Aborted memmap scan (slot_areas full)!\n");
+ return true;
+ }
+ }
+ return false;
+ }
+
+ /*
+ * If no kaslr_mem regions are stored, use the region directly.
+ */
+ slots_count(&region, minimum, image_size);
+ if (slot_area_index == MAX_SLOT_AREA) {
+ debug_putstr("Aborted memmap scan (slot_areas full)!\n");
+ return true;
+ }
+ return false;
+}
+
#ifdef CONFIG_EFI
/*
* Returns true if mirror region found (and must have been processed
@@ -709,11 +755,9 @@ process_efi_entries(unsigned long minimum, unsigned long image_size)

region.start = md->phys_addr;
region.size = md->num_pages << EFI_PAGE_SHIFT;
- process_mem_region(&region, minimum, image_size);
- if (slot_area_index == MAX_SLOT_AREA) {
- debug_putstr("Aborted EFI scan (slot_areas full)!\n");
+
+ if (process_mem_region(region, minimum, image_size))
break;
- }
}
return true;
}
@@ -740,11 +784,9 @@ static void process_e820_entries(unsigned long minimum,
continue;
region.start = entry->addr;
region.size = entry->size;
- process_mem_region(&region, minimum, image_size);
- if (slot_area_index == MAX_SLOT_AREA) {
- debug_putstr("Aborted e820 scan (slot_areas full)!\n");
+
+ if (process_mem_region(region, minimum, image_size))
break;
- }
}
}

--
2.14.3



2018-01-15 12:50:37

by Chao Fan

Subject: Re: [PATCH v6 5/5] kaslr: add kaslr_mem=nn[KMG]!ss[KMG] to avoid memory regions

Hi Luiz,

I don't know if this patch is OK for you.
Of coure you can only use kaslr_mem=nn@ss to solve the 1G huge page
issue. Because we know the region [0,1G] is not suitable for 1G huge
page, so you can specify ksalr_mem=1G@0 of kaslr_mem=1G to solve
your problem. But the regions may be too slow and is not good
for the randomness.

So as Kees said, I put the regions suitable for 1G huge pages into
mem_avoid, so you can use kaslr_mem=1G!1G to solve the problem in your
email.

Thanks,
Chao Fan

On Mon, Jan 15, 2018 at 08:40:16PM +0800, Chao Fan wrote:
>In current code, kaslr choose the only suitable memory region for 1G
>huge page, so the no suitable region for 1G huge page. So add this
>feature to store these regions.
>
>Of coure, we can use memmap= to do this job. But memmap will be handled
>in the later code, but kaslr_mem= only works in this period.
>
>It can help users to avoid more memory regions, not only the 1G huge
>huge page issue.
>
>Signed-off-by: Chao Fan <[email protected]>
>---
> arch/x86/boot/compressed/kaslr.c | 56 +++++++++++++++++++++++++++++++++++-----
> 1 file changed, 50 insertions(+), 6 deletions(-)
>
>diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
>index fc531fa1f10c..c71189cf8d56 100644
>--- a/arch/x86/boot/compressed/kaslr.c
>+++ b/arch/x86/boot/compressed/kaslr.c
>@@ -95,6 +95,18 @@ static bool memmap_too_large;
> /* Store memory limit specified by "mem=nn[KMG]" or "memmap=nn[KMG]" */
> unsigned long long mem_limit = ULLONG_MAX;
>
>+/*
>+ * Only supporting at most 4 unusable memory regions for
>+ * "kaslr_mem=nn[KMG]!ss[KMG]"
>+ */
>+#define MAX_KASLR_MEM_AVOID 4
>+
>+static bool kaslr_mem_avoid_too_large;
>+
>+enum kaslr_mem_type {
>+ CMD_MEM_USABLE = 1,
>+ CMD_MEM_AVOID,
>+};
>
> enum mem_avoid_index {
> MEM_AVOID_ZO_RANGE = 0,
>@@ -103,6 +115,8 @@ enum mem_avoid_index {
> MEM_AVOID_BOOTPARAMS,
> MEM_AVOID_MEMMAP_BEGIN,
> MEM_AVOID_MEMMAP_END = MEM_AVOID_MEMMAP_BEGIN + MAX_MEMMAP_REGIONS - 1,
>+ MEM_AVOID_KASLR_MEM_BEGIN,
>+ MEM_AVOID_KASLR_MEM_END = MEM_AVOID_KASLR_MEM_BEGIN + MAX_KASLR_MEM_AVOID - 1,
> MEM_AVOID_MAX,
> };
>
>@@ -217,7 +231,8 @@ static void mem_avoid_memmap(char *str)
>
> static int parse_kaslr_mem(char *p,
> unsigned long long *start,
>- unsigned long long *size)
>+ unsigned long long *size,
>+ int *cmd_type)
> {
> char *oldp;
>
>@@ -230,8 +245,13 @@ static int parse_kaslr_mem(char *p,
> return -EINVAL;
>
> switch (*p) {
>+ case '!' :
>+ *start = memparse(p + 1, &p);
>+ *cmd_type = CMD_MEM_AVOID;
>+ return 0;
> case '@':
> *start = memparse(p + 1, &p);
>+ *cmd_type = CMD_MEM_USABLE;
> return 0;
> default:
> /*
>@@ -240,6 +260,7 @@ static int parse_kaslr_mem(char *p,
> * the region starts from 0.
> */
> *start = 0;
>+ *cmd_type = CMD_MEM_USABLE;
> return 0;
> }
>
>@@ -248,26 +269,44 @@ static int parse_kaslr_mem(char *p,
>
> static void parse_kaslr_mem_regions(char *str)
> {
>- static int i;
>+ static int i = 0, j = 0;
>+ int cmd_type = 0;
>
> while (str && (i < MAX_KASLR_MEM_USABLE)) {
> int rc;
> unsigned long long start, size;
> char *k = strchr(str, ',');
>
>+ if (i >= MAX_KASLR_MEM_USABLE && j >= MAX_KASLR_MEM_AVOID)
>+ break;
>+
> if (k)
> *k++ = 0;
>
>- rc = parse_kaslr_mem(str, &start, &size);
>+ rc = parse_kaslr_mem(str, &start, &size, &cmd_type);
> if (rc < 0)
> break;
> str = k;
>
>- mem_usable[i].start = start;
>- mem_usable[i].size = size;
>- i++;
>+ if (cmd_type == CMD_MEM_USABLE) {
>+ if (i >= MAX_KASLR_MEM_USABLE)
>+ continue;
>+ mem_usable[i].start = start;
>+ mem_usable[i].size = size;
>+ i++;
>+ } else if (cmd_type == CMD_MEM_AVOID) {
>+ if (j >= MAX_KASLR_MEM_AVOID)
>+ continue;
>+ mem_avoid[MEM_AVOID_KASLR_MEM_BEGIN + j].start = start;
>+ mem_avoid[MEM_AVOID_KASLR_MEM_BEGIN + j].size = size;
>+ j++;
>+ }
> }
> num_usable_region = i;
>+
>+ /* More than 4 kaslr_mem avoid, fail kaslr */
>+ if ((j >= MAX_KASLR_MEM_AVOID) && str)
>+ kaslr_mem_avoid_too_large = true;
> }
>
> static int handle_mem_filter(void)
>@@ -799,6 +838,11 @@ static unsigned long find_random_phys_addr(unsigned long minimum,
> return 0;
> }
>
>+ /* Check if we had too many kaslr_mem avoid. */
>+ if (kaslr_mem_avoid_too_large) {
>+ debug_putstr("Aborted memory entries scan (more than 4 kaslr_mem avoid args)!\n");
>+ return 0;
>+ }
> /* Make sure minimum is aligned. */
> minimum = ALIGN(minimum, CONFIG_PHYSICAL_ALIGN);
>
>--
>2.14.3
>


2018-01-15 13:43:29

by Chao Fan

Subject: Re: [PATCH v6 1/5] kaslr: add kaslr_mem=nn[KMG]@ss[KMG] to specify extracting memory

On Mon, Jan 15, 2018 at 08:40:12PM +0800, Chao Fan wrote:
>In current code, kaslr only has a method to avoid some memory regions,
>but no method to specify the regions for kaslr to extract. So kaslr
>may choose the wrong position sometimes, which will cause some other
>features fail.
>
>Here is a problem that kaslr may choose the memory region in movable
>nodes to extract kernel, which will make the nodes can't be hot-removed.
>To solve it, we can specify the memory region in immovable node.
>Create "kaslr_mem=" to store the regions in immovable nodes, where should
>be chosen by kaslr.
>
>Also change the "handle_mem_memmap" to "handle_mem_filter", since
>it will not only handle memmap parameter now.
>
>Multiple regions can be specified, comma delimited.
>Considering the usage of memory, only support for 4 regions.
>4 regions contains 2 nodes at least, enough for kernel to extract.
>
>Signed-off-by: Chao Fan <[email protected]>
>---
> arch/x86/boot/compressed/kaslr.c | 73 ++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 70 insertions(+), 3 deletions(-)
>
>diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
>index 8199a6187251..b071f6edd7b2 100644
>--- a/arch/x86/boot/compressed/kaslr.c
>+++ b/arch/x86/boot/compressed/kaslr.c
>@@ -108,6 +108,15 @@ enum mem_avoid_index {
>
> static struct mem_vector mem_avoid[MEM_AVOID_MAX];
>
>+/* Only support at most 4 usable memory regions specified for kaslr */
>+#define MAX_KASLR_MEM_USABLE 4
>+
>+/* Store the usable memory regions for kaslr */
>+static struct mem_vector mem_usable[MAX_KASLR_MEM_USABLE];

Here it perhaps should be "kaslr_mem", but there is mem_avoid in the
current code, so I named it "mem_usable" to be symmetrical with
"mem_avoid". See more in PATCH 05.

Thanks,
Chao Fan

>+
>+/* The amount of usable regions for kaslr user specify, not more than 4 */
>+static int num_usable_region;
>+
> static bool mem_overlaps(struct mem_vector *one, struct mem_vector *two)
> {
> /* Item one is entirely before item two. */
>@@ -206,7 +215,62 @@ static void mem_avoid_memmap(char *str)
> memmap_too_large = true;
> }
>
>-static int handle_mem_memmap(void)
>+static int parse_kaslr_mem(char *p,
>+ unsigned long long *start,
>+ unsigned long long *size)
>+{
>+ char *oldp;
>+
>+ if (!p)
>+ return -EINVAL;
>+
>+ oldp = p;
>+ *size = memparse(p, &p);
>+ if (p == oldp)
>+ return -EINVAL;
>+
>+ switch (*p) {
>+ case '@':
>+ *start = memparse(p + 1, &p);
>+ return 0;
>+ default:
>+ /*
>+ * If w/o offset, only size specified, kaslr_mem=nn[KMG]
>+ * has the same behaviour as kaslr_mem=nn[KMG]@0. It means
>+ * the region starts from 0.
>+ */
>+ *start = 0;
>+ return 0;
>+ }
>+
>+ return -EINVAL;
>+}
>+
>+static void parse_kaslr_mem_regions(char *str)
>+{
>+ static int i;
>+
>+ while (str && (i < MAX_KASLR_MEM_USABLE)) {
>+ int rc;
>+ unsigned long long start, size;
>+ char *k = strchr(str, ',');
>+
>+ if (k)
>+ *k++ = 0;
>+
>+ rc = parse_kaslr_mem(str, &start, &size);
>+ if (rc < 0)
>+ break;
>+ str = k;
>+
>+ mem_usable[i].start = start;
>+ mem_usable[i].size = size;
>+ i++;
>+ }
>+ num_usable_region = i;
>+}
>+
>+static int handle_mem_filter(void)
> {
> char *args = (char *)get_cmd_line_ptr();
> size_t len = strlen((char *)args);
>@@ -214,7 +278,8 @@ static int handle_mem_memmap(void)
> char *param, *val;
> u64 mem_size;
>
>- if (!strstr(args, "memmap=") && !strstr(args, "mem="))
>+ if (!strstr(args, "memmap=") && !strstr(args, "mem=") &&
>+ !strstr(args, "kaslr_mem="))
> return 0;
>
> tmp_cmdline = malloc(len + 1);
>@@ -239,6 +304,8 @@ static int handle_mem_memmap(void)
>
> if (!strcmp(param, "memmap")) {
> mem_avoid_memmap(val);
>+ } else if (!strcmp(param, "kaslr_mem")) {
>+ parse_kaslr_mem_regions(val);
> } else if (!strcmp(param, "mem")) {
> char *p = val;
>
>@@ -378,7 +445,7 @@ static void mem_avoid_init(unsigned long input, unsigned long input_size,
> /* We don't need to set a mapping for setup_data. */
>
> /* Mark the memmap regions we need to avoid */
>- handle_mem_memmap();
>+ handle_mem_filter();
>
> #ifdef CONFIG_X86_VERBOSE_BOOTUP
> /* Make sure video RAM can be used. */
>--
>2.14.3
>


2018-01-15 12:41:28

by Chao Fan

Subject: [PATCH v6 5/5] kaslr: add kaslr_mem=nn[KMG]!ss[KMG] to avoid memory regions

In the current code, kaslr may choose the only memory region that is
suitable for a 1G huge page, leaving no suitable region for the huge
page. So add this feature to store such regions in mem_avoid.

Of course, we could use memmap= to do this job. But memmap= is handled
later in the boot code, while kaslr_mem= already works at this stage.

It can help users avoid more memory regions, not only for the 1G huge
page issue.

Signed-off-by: Chao Fan <[email protected]>
---
arch/x86/boot/compressed/kaslr.c | 56 +++++++++++++++++++++++++++++++++++-----
1 file changed, 50 insertions(+), 6 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index fc531fa1f10c..c71189cf8d56 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -95,6 +95,18 @@ static bool memmap_too_large;
/* Store memory limit specified by "mem=nn[KMG]" or "memmap=nn[KMG]" */
unsigned long long mem_limit = ULLONG_MAX;

+/*
+ * Only supporting at most 4 unusable memory regions for
+ * "kaslr_mem=nn[KMG]!ss[KMG]"
+ */
+#define MAX_KASLR_MEM_AVOID 4
+
+static bool kaslr_mem_avoid_too_large;
+
+enum kaslr_mem_type {
+ CMD_MEM_USABLE = 1,
+ CMD_MEM_AVOID,
+};

enum mem_avoid_index {
MEM_AVOID_ZO_RANGE = 0,
@@ -103,6 +115,8 @@ enum mem_avoid_index {
MEM_AVOID_BOOTPARAMS,
MEM_AVOID_MEMMAP_BEGIN,
MEM_AVOID_MEMMAP_END = MEM_AVOID_MEMMAP_BEGIN + MAX_MEMMAP_REGIONS - 1,
+ MEM_AVOID_KASLR_MEM_BEGIN,
+ MEM_AVOID_KASLR_MEM_END = MEM_AVOID_KASLR_MEM_BEGIN + MAX_KASLR_MEM_AVOID - 1,
MEM_AVOID_MAX,
};

@@ -217,7 +231,8 @@ static void mem_avoid_memmap(char *str)

static int parse_kaslr_mem(char *p,
unsigned long long *start,
- unsigned long long *size)
+ unsigned long long *size,
+ int *cmd_type)
{
char *oldp;

@@ -230,8 +245,13 @@ static int parse_kaslr_mem(char *p,
return -EINVAL;

switch (*p) {
+ case '!':
+ *start = memparse(p + 1, &p);
+ *cmd_type = CMD_MEM_AVOID;
+ return 0;
case '@':
*start = memparse(p + 1, &p);
+ *cmd_type = CMD_MEM_USABLE;
return 0;
default:
/*
@@ -240,6 +260,7 @@ static int parse_kaslr_mem(char *p,
* the region starts from 0.
*/
*start = 0;
+ *cmd_type = CMD_MEM_USABLE;
return 0;
}

@@ -248,26 +269,44 @@ static int parse_kaslr_mem(char *p,

static void parse_kaslr_mem_regions(char *str)
{
- static int i;
+ static int i = 0, j = 0;
+ int cmd_type = 0;

- while (str && (i < MAX_KASLR_MEM_USABLE)) {
+ while (str) {
int rc;
unsigned long long start, size;
char *k = strchr(str, ',');

+ if (i >= MAX_KASLR_MEM_USABLE && j >= MAX_KASLR_MEM_AVOID)
+ break;
+
if (k)
*k++ = 0;

- rc = parse_kaslr_mem(str, &start, &size);
+ rc = parse_kaslr_mem(str, &start, &size, &cmd_type);
if (rc < 0)
break;
str = k;

- mem_usable[i].start = start;
- mem_usable[i].size = size;
- i++;
+ if (cmd_type == CMD_MEM_USABLE) {
+ if (i >= MAX_KASLR_MEM_USABLE)
+ continue;
+ mem_usable[i].start = start;
+ mem_usable[i].size = size;
+ i++;
+ } else if (cmd_type == CMD_MEM_AVOID) {
+ if (j >= MAX_KASLR_MEM_AVOID)
+ continue;
+ mem_avoid[MEM_AVOID_KASLR_MEM_BEGIN + j].start = start;
+ mem_avoid[MEM_AVOID_KASLR_MEM_BEGIN + j].size = size;
+ j++;
+ }
}
num_usable_region = i;
+
+ /* More than 4 kaslr_mem avoid, fail kaslr */
+ if ((j >= MAX_KASLR_MEM_AVOID) && str)
+ kaslr_mem_avoid_too_large = true;
}

static int handle_mem_filter(void)
@@ -799,6 +838,11 @@ static unsigned long find_random_phys_addr(unsigned long minimum,
return 0;
}

+ /* Check if we had too many kaslr_mem avoid. */
+ if (kaslr_mem_avoid_too_large) {
+ debug_putstr("Aborted memory entries scan (more than 4 kaslr_mem avoid args)!\n");
+ return 0;
+ }
/* Make sure minimum is aligned. */
minimum = ALIGN(minimum, CONFIG_PHYSICAL_ALIGN);

--
2.14.3



2018-01-15 14:09:05

by Chao Fan

Subject: [PATCH v6 1/5] kaslr: add kaslr_mem=nn[KMG]@ss[KMG] to specify extracting memory

In the current code, kaslr only has a method to avoid some memory
regions, but no method to specify the regions where kaslr can extract
the kernel. So kaslr may sometimes choose the wrong position, which
will cause some other features to fail.

Here is a problem: kaslr may choose a memory region in a movable node
to extract the kernel, which will make that node impossible to
hot-remove. To solve it, we can specify the memory regions in immovable
nodes. Create "kaslr_mem=" to store the regions in immovable nodes,
from which kaslr should choose.

Also rename "handle_mem_memmap" to "handle_mem_filter", since it no
longer handles only the memmap parameter.

Multiple regions can be specified, comma delimited.
Considering memory usage, only 4 regions are supported.
4 regions can cover at least 2 nodes, enough for extracting the kernel.

Signed-off-by: Chao Fan <[email protected]>
---
arch/x86/boot/compressed/kaslr.c | 73 ++++++++++++++++++++++++++++++++++++++--
1 file changed, 70 insertions(+), 3 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 8199a6187251..b071f6edd7b2 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -108,6 +108,15 @@ enum mem_avoid_index {

static struct mem_vector mem_avoid[MEM_AVOID_MAX];

+/* Only support at most 4 usable memory regions specified for kaslr */
+#define MAX_KASLR_MEM_USABLE 4
+
+/* Store the usable memory regions for kaslr */
+static struct mem_vector mem_usable[MAX_KASLR_MEM_USABLE];
+
+/* The amount of usable regions for kaslr user specify, not more than 4 */
+static int num_usable_region;
+
static bool mem_overlaps(struct mem_vector *one, struct mem_vector *two)
{
/* Item one is entirely before item two. */
@@ -206,7 +215,62 @@ static void mem_avoid_memmap(char *str)
memmap_too_large = true;
}

-static int handle_mem_memmap(void)
+static int parse_kaslr_mem(char *p,
+ unsigned long long *start,
+ unsigned long long *size)
+{
+ char *oldp;
+
+ if (!p)
+ return -EINVAL;
+
+ oldp = p;
+ *size = memparse(p, &p);
+ if (p == oldp)
+ return -EINVAL;
+
+ switch (*p) {
+ case '@':
+ *start = memparse(p + 1, &p);
+ return 0;
+ default:
+ /*
+ * If no offset is given and only a size is specified,
+ * kaslr_mem=nn[KMG] behaves like kaslr_mem=nn[KMG]@0,
+ * i.e. the region starts from 0.
+ */
+ *start = 0;
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static void parse_kaslr_mem_regions(char *str)
+{
+ static int i;
+
+ while (str && (i < MAX_KASLR_MEM_USABLE)) {
+ int rc;
+ unsigned long long start, size;
+ char *k = strchr(str, ',');
+
+ if (k)
+ *k++ = 0;
+
+ rc = parse_kaslr_mem(str, &start, &size);
+ if (rc < 0)
+ break;
+ str = k;
+
+ mem_usable[i].start = start;
+ mem_usable[i].size = size;
+ i++;
+ }
+ num_usable_region = i;
+}
+
+static int handle_mem_filter(void)
{
char *args = (char *)get_cmd_line_ptr();
size_t len = strlen((char *)args);
@@ -214,7 +278,8 @@ static int handle_mem_memmap(void)
char *param, *val;
u64 mem_size;

- if (!strstr(args, "memmap=") && !strstr(args, "mem="))
+ if (!strstr(args, "memmap=") && !strstr(args, "mem=") &&
+ !strstr(args, "kaslr_mem="))
return 0;

tmp_cmdline = malloc(len + 1);
@@ -239,6 +304,8 @@ static int handle_mem_memmap(void)

if (!strcmp(param, "memmap")) {
mem_avoid_memmap(val);
+ } else if (!strcmp(param, "kaslr_mem")) {
+ parse_kaslr_mem_regions(val);
} else if (!strcmp(param, "mem")) {
char *p = val;

@@ -378,7 +445,7 @@ static void mem_avoid_init(unsigned long input, unsigned long input_size,
/* We don't need to set a mapping for setup_data. */

/* Mark the memmap regions we need to avoid */
- handle_mem_memmap();
+ handle_mem_filter();

#ifdef CONFIG_X86_VERBOSE_BOOTUP
/* Make sure video RAM can be used. */
--
2.14.3



2018-01-15 14:09:27

by Chao Fan

Subject: [PATCH v6 3/5] kaslr: disable memory mirror feature when movable_node

In the kernel proper, if movable_node is specified, the mirror feature
is skipped. So we should also skip the mirror feature in kaslr.

Signed-off-by: Chao Fan <[email protected]>
---
arch/x86/boot/compressed/kaslr.c | 7 +++++++
1 file changed, 7 insertions(+)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 38816d3f8865..5615f26364f9 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -646,6 +646,7 @@ static bool
process_efi_entries(unsigned long minimum, unsigned long image_size)
{
struct efi_info *e = &boot_params->efi_info;
+ char *args = (char *)get_cmd_line_ptr();
bool efi_mirror_found = false;
struct mem_vector region;
efi_memory_desc_t *md;
@@ -679,6 +680,12 @@ process_efi_entries(unsigned long minimum, unsigned long image_size)
}
}

+#ifdef CONFIG_MEMORY_HOTPLUG
+ /* Skip memory mirror if movable_node specified */
+ if (strstr(args, "movable_node"))
+ efi_mirror_found = false;
+#endif
+
for (i = 0; i < nr_desc; i++) {
md = efi_early_memdesc_ptr(pmap, e->efi_memdesc_size, i);

--
2.14.3



2018-01-15 22:40:50

by Randy Dunlap

Subject: Re: [PATCH v6 1/5] kaslr: add kaslr_mem=nn[KMG]@ss[KMG] to specify extracting memory

On 01/15/2018 04:40 AM, Chao Fan wrote:
> In current code, kaslr only has a method to avoid some memory regions,
> but no method to specify the regions for kaslr to extract. So kaslr
> may choose the wrong position sometimes, which will cause some other
> features fail.
>
> Here is a problem that kaslr may choose the memory region in movable
> nodes to extract kernel, which will make the nodes can't be hot-removed.
> To solve it, we can specify the memory region in immovable node.
> Create "kaslr_mem=" to store the regions in immovable nodes, where should
> be chosen by kaslr.
>
> Also change the "handle_mem_memmap" to "handle_mem_filter", since
> it will not only handle memmap parameter now.

Hi,

Are any of the kernel command-line parameters documented anywhere?

Thanks.

> Multiple regions can be specified, comma delimited.
> Considering the usage of memory, only support for 4 regions.
> 4 regions contains 2 nodes at least, enough for kernel to extract.
>
> Signed-off-by: Chao Fan <[email protected]>
> ---
> arch/x86/boot/compressed/kaslr.c | 73 ++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 70 insertions(+), 3 deletions(-)


--
~Randy

2018-01-16 00:43:53

by Baoquan He

Subject: Re: [PATCH v6 5/5] kaslr: add kaslr_mem=nn[KMG]!ss[KMG] to avoid memory regions

On 01/15/18 at 08:49pm, Chao Fan wrote:
> Hi Luiz,
>
> I don't know if this patch is OK for you.
> Of coure you can only use kaslr_mem=nn@ss to solve the 1G huge page
> issue. Because we know the region [0,1G] is not suitable for 1G huge
> page, so you can specify ksalr_mem=1G@0 of kaslr_mem=1G to solve
> your problem. But the regions may be too slow and is not good
> for the randomness.

I guess you want to say:

"Because we know the region [0,1G] is not suitable for 1G huge page, so
you can specify ksalr_mem=1G@0 or kaslr_mem=1G to solve your problem.
But the region may be too small and is not good for the randomness."

Hi Luiz,

For hugetlb issue, we can always suggest users adding "kaslr_mem=1G" to
kernel cmdline on KVM. Surely if users are very familiar with system
memory layout, they can specify "kaslr_mem=1G, kaslr_mem=1G@2G
kaslr_mem=1G@4G" for better kernel text KASLR. The "kaslr_mem=1G"
suggestion can be documented for redhat kvm usage. Any thought or
suggestion?

Thanks
Baoquan

>
> So as Kess said, I put the regions suitable for 1G huge page to
> mem_avoid, you can use kaslr_mem=1G!1G to solve the problem in your
> email.
>
> Thanks,
> Chao Fan
>
> On Mon, Jan 15, 2018 at 08:40:16PM +0800, Chao Fan wrote:
> >In current code, kaslr choose the only suitable memory region for 1G
> >huge page, so the no suitable region for 1G huge page. So add this
> >feature to store these regions.
> >
> >Of coure, we can use memmap= to do this job. But memmap will be handled
> >in the later code, but kaslr_mem= only works in this period.
> >
> >It can help users to avoid more memory regions, not only the 1G huge
> >huge page issue.
> >
> >Signed-off-by: Chao Fan <[email protected]>
> >---
> > arch/x86/boot/compressed/kaslr.c | 56 +++++++++++++++++++++++++++++++++++-----
> > 1 file changed, 50 insertions(+), 6 deletions(-)
> >
> >diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
> >index fc531fa1f10c..c71189cf8d56 100644
> >--- a/arch/x86/boot/compressed/kaslr.c
> >+++ b/arch/x86/boot/compressed/kaslr.c
> >@@ -95,6 +95,18 @@ static bool memmap_too_large;
> > /* Store memory limit specified by "mem=nn[KMG]" or "memmap=nn[KMG]" */
> > unsigned long long mem_limit = ULLONG_MAX;
> >
> >+/*
> >+ * Only supporting at most 4 unusable memory regions for
> >+ * "kaslr_mem=nn[KMG]!ss[KMG]"
> >+ */
> >+#define MAX_KASLR_MEM_AVOID 4
> >+
> >+static bool kaslr_mem_avoid_too_large;
> >+
> >+enum kaslr_mem_type {
> >+ CMD_MEM_USABLE = 1,
> >+ CMD_MEM_AVOID,
> >+};
> >
> > enum mem_avoid_index {
> > MEM_AVOID_ZO_RANGE = 0,
> >@@ -103,6 +115,8 @@ enum mem_avoid_index {
> > MEM_AVOID_BOOTPARAMS,
> > MEM_AVOID_MEMMAP_BEGIN,
> > MEM_AVOID_MEMMAP_END = MEM_AVOID_MEMMAP_BEGIN + MAX_MEMMAP_REGIONS - 1,
> >+ MEM_AVOID_KASLR_MEM_BEGIN,
> >+ MEM_AVOID_KASLR_MEM_END = MEM_AVOID_KASLR_MEM_BEGIN + MAX_KASLR_MEM_AVOID - 1,
> > MEM_AVOID_MAX,
> > };
> >
> >@@ -217,7 +231,8 @@ static void mem_avoid_memmap(char *str)
> >
> > static int parse_kaslr_mem(char *p,
> > unsigned long long *start,
> >- unsigned long long *size)
> >+ unsigned long long *size,
> >+ int *cmd_type)
> > {
> > char *oldp;
> >
> >@@ -230,8 +245,13 @@ static int parse_kaslr_mem(char *p,
> > return -EINVAL;
> >
> > switch (*p) {
> >+ case '!' :
> >+ *start = memparse(p + 1, &p);
> >+ *cmd_type = CMD_MEM_AVOID;
> >+ return 0;
> > case '@':
> > *start = memparse(p + 1, &p);
> >+ *cmd_type = CMD_MEM_USABLE;
> > return 0;
> > default:
> > /*
> >@@ -240,6 +260,7 @@ static int parse_kaslr_mem(char *p,
> > * the region starts from 0.
> > */
> > *start = 0;
> >+ *cmd_type = CMD_MEM_USABLE;
> > return 0;
> > }
> >
> >@@ -248,26 +269,44 @@ static int parse_kaslr_mem(char *p,
> >
> > static void parse_kaslr_mem_regions(char *str)
> > {
> >- static int i;
> >+ static int i = 0, j = 0;
> >+ int cmd_type = 0;
> >
> > while (str && (i < MAX_KASLR_MEM_USABLE)) {
> > int rc;
> > unsigned long long start, size;
> > char *k = strchr(str, ',');
> >
> >+ if (i >= MAX_KASLR_MEM_USABLE && j >= MAX_KASLR_MEM_AVOID)
> >+ break;
> >+
> > if (k)
> > *k++ = 0;
> >
> >- rc = parse_kaslr_mem(str, &start, &size);
> >+ rc = parse_kaslr_mem(str, &start, &size, &cmd_type);
> > if (rc < 0)
> > break;
> > str = k;
> >
> >- mem_usable[i].start = start;
> >- mem_usable[i].size = size;
> >- i++;
> >+ if (cmd_type == CMD_MEM_USABLE) {
> >+ if (i >= MAX_KASLR_MEM_USABLE)
> >+ continue;
> >+ mem_usable[i].start = start;
> >+ mem_usable[i].size = size;
> >+ i++;
> >+ } else if (cmd_type == CMD_MEM_AVOID) {
> >+ if (j >= MAX_KASLR_MEM_AVOID)
> >+ continue;
> >+ mem_avoid[MEM_AVOID_KASLR_MEM_BEGIN + j].start = start;
> >+ mem_avoid[MEM_AVOID_KASLR_MEM_BEGIN + j].size = size;
> >+ j++;
> >+ }
> > }
> > num_usable_region = i;
> >+
> >+ /* More than 4 kaslr_mem avoid, fail kaslr */
> >+ if ((j >= MAX_KASLR_MEM_AVOID) && str)
> >+ kaslr_mem_avoid_too_large = true;
> > }
> >
> > static int handle_mem_filter(void)
> >@@ -799,6 +838,11 @@ static unsigned long find_random_phys_addr(unsigned long minimum,
> > return 0;
> > }
> >
> >+ /* Check if we had too many kaslr_mem avoid. */
> >+ if (kaslr_mem_avoid_too_large) {
> >+ debug_putstr("Aborted memory entries scan (more than 4 kaslr_mem avoid args)!\n");
> >+ return 0;
> >+ }
> > /* Make sure minimum is aligned. */
> > minimum = ALIGN(minimum, CONFIG_PHYSICAL_ALIGN);
> >
> >--
> >2.14.3
> >
>
>

2018-01-16 01:17:05

by Chao Fan

[permalink] [raw]
Subject: Re: [PATCH v6 1/5] kaslr: add kaslr_mem=nn[KMG]@ss[KMG] to specify extracting memory

On Mon, Jan 15, 2018 at 02:40:35PM -0800, Randy Dunlap wrote:
>On 01/15/2018 04:40 AM, Chao Fan wrote:
>> In current code, kaslr only has a method to avoid some memory regions,
>> but no method to specify the regions from which to extract. So kaslr
>> may sometimes choose the wrong position, which can cause some other
>> features to fail.
>> features fail.
>>
>> Here is a problem: kaslr may choose a memory region in a movable node
>> to extract the kernel, which prevents that node from being hot-removed.
>> To solve it, we can specify memory regions in immovable nodes. Create
>> "kaslr_mem=" to store the regions in immovable nodes, from which kaslr
>> should choose.
>>
>> Also rename "handle_mem_memmap" to "handle_mem_filter", since it no
>> longer handles only the memmap parameter.
>
>Hi,
>
>Are any of the kernel command-line parameters documented anywhere?

Hi,

Sorry, not yet.
Since the patchset has been discussed on the mailing list for a long
time and has changed many times, I plan to add the documentation after
this version has been merged or ACKed.

Thanks,
Chao Fan

>
>Thanks.
>
>> Multiple regions can be specified, comma delimited. To limit memory
>> usage, only 4 regions are supported. 4 regions cover at least 2 nodes,
>> which is enough for extracting the kernel.
>>
>> Signed-off-by: Chao Fan <[email protected]>
>> ---
>> arch/x86/boot/compressed/kaslr.c | 73 ++++++++++++++++++++++++++++++++++++++--
>> 1 file changed, 70 insertions(+), 3 deletions(-)
>
>
>--
>~Randy
>
>


2018-01-16 01:37:30

by Chao Fan

[permalink] [raw]
Subject: Re: [PATCH v6 5/5] kaslr: add kaslr_mem=nn[KMG]!ss[KMG] to avoid memory regions

On Tue, Jan 16, 2018 at 08:43:20AM +0800, Baoquan He wrote:
>On 01/15/18 at 08:49pm, Chao Fan wrote:
>> Hi Luiz,
>>
>> I don't know if this patch is OK for you.
>> Of coure you can only use kaslr_mem=nn@ss to solve the 1G huge page
>> issue. Because we know the region [0,1G] is not suitable for 1G huge
>> page, so you can specify ksalr_mem=1G@0 of kaslr_mem=1G to solve
>> your problem. But the regions may be too slow and is not good
>> for the randomness.
>
>I guess you want to say:
>
>"Because we know the region [0,1G] is not suitable for 1G huge page, so
>you can specify ksalr_mem=1G@0 or kaslr_mem=1G to solve your problem.
>But the region may be too small and is not good for the randomness."
>
>Hi Luiz,
>
>For hugetlb issue, we can always suggest users adding "kaslr_mem=1G" to
>kernel cmdline on KVM. Surely if users are very familiar with system
>memory layout, they can specify "kaslr_mem=1G, kaslr_mem=1G@2G
>kaslr_mem=1G@4G" for better kernel text KASLR. The "kaslr_mem=1G"
>suggestion can be documented for redhat kvm usage. Any thought or
>suggestion?
>
>Thanks
>Baoquan
>

Thanks for your explanation.

Thanks,
Chao Fan

>>
>> So as Kess said, I put the regions suitable for 1G huge page to
>> mem_avoid, you can use kaslr_mem=1G!1G to solve the problem in your
>> email.
>>
>> Thanks,
>> Chao Fan
>>
>> On Mon, Jan 15, 2018 at 08:40:16PM +0800, Chao Fan wrote:
>> >In current code, kaslr may choose the only memory region suitable for
>> >a 1G huge page, leaving no suitable region for 1G huge pages. So add
>> >this feature to store the regions kaslr should avoid.
>> >
>> >Of course, we could use memmap= to do this job, but memmap= is handled
>> >later in boot, while kaslr_mem= takes effect during this period.
>> >
>> >It can help users avoid more memory regions, beyond just the 1G huge
>> >page issue.
>> >
>> >Signed-off-by: Chao Fan <[email protected]>
>> >---
>> > arch/x86/boot/compressed/kaslr.c | 56 +++++++++++++++++++++++++++++++++++-----
>> > 1 file changed, 50 insertions(+), 6 deletions(-)
>> >
>> >[...]
>>
>>
>
>


2018-01-16 16:34:35

by Luiz Capitulino

[permalink] [raw]
Subject: Re: [PATCH v6 5/5] kaslr: add kaslr_mem=nn[KMG]!ss[KMG] to avoid memory regions

On Tue, 16 Jan 2018 08:43:20 +0800
Baoquan He <[email protected]> wrote:

> On 01/15/18 at 08:49pm, Chao Fan wrote:
> > Hi Luiz,
> >
> > I don't know if this patch is OK for you.
> > Of coure you can only use kaslr_mem=nn@ss to solve the 1G huge page
> > issue. Because we know the region [0,1G] is not suitable for 1G huge
> > page, so you can specify ksalr_mem=1G@0 of kaslr_mem=1G to solve
> > your problem. But the regions may be too slow and is not good
> > for the randomness.
>
> I guess you want to say:
>
> "Because we know the region [0,1G] is not suitable for 1G huge page, so
> you can specify ksalr_mem=1G@0 or kaslr_mem=1G to solve your problem.
> But the region may be too small and is not good for the randomness."
>
> Hi Luiz,
>
> For hugetlb issue, we can always suggest users adding "kaslr_mem=1G" to
> kernel cmdline on KVM. Surely if users are very familiar with system
> memory layout, they can specify "kaslr_mem=1G, kaslr_mem=1G@2G
> kaslr_mem=1G@4G" for better kernel text KASLR. The "kaslr_mem=1G"
> suggestion can be documented for redhat kvm usage. Any thought or
> suggestion?

I have to test it, but I'll only have time in one or two days.

Btw, I think this series is a very good improvement over the current
situation where the only option to solve the 1GB page problem is
to disable kaslr entirely.

However, I've discussed this problem with a few people and they think
that it may be possible to change KASLR to extract the kernel to
an already fragmented 1GB region, so that it doesn't split an otherwise
good 1GB page. The advantage of this approach is that it's automatic
and doesn't require the user to know the memory layout.

>
> Thanks
> Baoquan
>
> >
> > So as Kess said, I put the regions suitable for 1G huge page to
> > mem_avoid, you can use kaslr_mem=1G!1G to solve the problem in your
> > email.
> >
> > Thanks,
> > Chao Fan
> >
> > On Mon, Jan 15, 2018 at 08:40:16PM +0800, Chao Fan wrote:
> > >[...]
> >
> >
>

2018-01-17 03:53:29

by Baoquan He

[permalink] [raw]
Subject: Re: [PATCH v6 5/5] kaslr: add kaslr_mem=nn[KMG]!ss[KMG] to avoid memory regions

On 01/16/18 at 11:34am, Luiz Capitulino wrote:
> On Tue, 16 Jan 2018 08:43:20 +0800
> Baoquan He <[email protected]> wrote:
>
> > On 01/15/18 at 08:49pm, Chao Fan wrote:
> > > Hi Luiz,
> > >
> > > I don't know if this patch is OK for you.
> > > Of coure you can only use kaslr_mem=nn@ss to solve the 1G huge page
> > > issue. Because we know the region [0,1G] is not suitable for 1G huge
> > > page, so you can specify ksalr_mem=1G@0 of kaslr_mem=1G to solve
> > > your problem. But the regions may be too slow and is not good
> > > for the randomness.
> >
> > I guess you want to say:
> >
> > "Because we know the region [0,1G] is not suitable for 1G huge page, so
> > you can specify ksalr_mem=1G@0 or kaslr_mem=1G to solve your problem.
> > But the region may be too small and is not good for the randomness."
> >
> > Hi Luiz,
> >
> > For hugetlb issue, we can always suggest users adding "kaslr_mem=1G" to
> > kernel cmdline on KVM. Surely if users are very familiar with system
> > memory layout, they can specify "kaslr_mem=1G, kaslr_mem=1G@2G
> > kaslr_mem=1G@4G" for better kernel text KASLR. The "kaslr_mem=1G"
> > suggestion can be documented for redhat kvm usage. Any thought or
> > suggestion?
>
> I have to test it, but I'll only have time in one or two days.
>
> Btw, I think this series is a very good improvement over the current
> situation where the only option to solve the 1GB page problem is
> to disable kaslr entirely.
>
> However, I've discussed this problem with a few people and they think
> that it may be possible to change KASLR to extract the kernel to
> an already fragmented 1GB region, so that it doesn't split an otherwise
> good 1GB page. The advantage of this approach is that it's automatic
> and doesn't require the user to know the memory layout.

Hmm, I see what you mean. If we do that in the KASLR code, we need to
parse the kernel cmdline to get the hugepage information and find out
the already fragmented 1GB regions. And this is only needed on KVM
guests with small memory, in practice about 4G, which is a corner case.
I am not sure it's a good idea to randomize the kernel only within a
fragmented 1GB region, and only below 4G.

Thanks
Baoquan

> >
> > >
> > > So as Kess said, I put the regions suitable for 1G huge page to
> > > mem_avoid, you can use kaslr_mem=1G!1G to solve the problem in your
> > > email.
> > >
> > > Thanks,
> > > Chao Fan
> > >
> > > On Mon, Jan 15, 2018 at 08:40:16PM +0800, Chao Fan wrote:
> > > >[...]
> > >
> > >
> >
>

2018-01-17 05:41:13

by Chao Fan

[permalink] [raw]
Subject: Re: [PATCH v6 5/5] kaslr: add kaslr_mem=nn[KMG]!ss[KMG] to avoid memory regions

On Tue, Jan 16, 2018 at 11:34:23AM -0500, Luiz Capitulino wrote:
>On Tue, 16 Jan 2018 08:43:20 +0800
>Baoquan He <[email protected]> wrote:
>
>> On 01/15/18 at 08:49pm, Chao Fan wrote:
>> > Hi Luiz,
>> >
>> > I don't know if this patch is OK for you.
>> > Of coure you can only use kaslr_mem=nn@ss to solve the 1G huge page
>> > issue. Because we know the region [0,1G] is not suitable for 1G huge
>> > page, so you can specify ksalr_mem=1G@0 of kaslr_mem=1G to solve
>> > your problem. But the regions may be too slow and is not good
>> > for the randomness.
>>
>> I guess you want to say:
>>
>> "Because we know the region [0,1G] is not suitable for 1G huge page, so
>> you can specify ksalr_mem=1G@0 or kaslr_mem=1G to solve your problem.
>> But the region may be too small and is not good for the randomness."
>>
>> Hi Luiz,
>>
>> For hugetlb issue, we can always suggest users adding "kaslr_mem=1G" to
>> kernel cmdline on KVM. Surely if users are very familiar with system
>> memory layout, they can specify "kaslr_mem=1G, kaslr_mem=1G@2G
>> kaslr_mem=1G@4G" for better kernel text KASLR. The "kaslr_mem=1G"
>> suggestion can be documented for redhat kvm usage. Any thought or
>> suggestion?
>
>I have to test it, but I'll only have time in one or two days.
>
>Btw, I think this series is a very good improvement over the current
>situation where the only option to solve the 1GB page problem is
>to disable kaslr entirely.
>
>However, I've discussed this problem with a few people and they think
>that it may be possible to change KASLR to extract the kernel to
>an already fragmented 1GB region, so that it doesn't split an otherwise
>good 1GB page. The advantage of this approach is that it's automatic
>and doesn't require the user to know the memory layout.
>

It seems kaslr_mem=nn@ss can solve your problem, so the new feature in
PATCH 5/5 is not needed now. I will post the new version without this
part.

Thanks,
Chao Fan

>>
>> Thanks
>> Baoquan
>>
>> >
>> > So as Kess said, I put the regions suitable for 1G huge page to
>> > mem_avoid, you can use kaslr_mem=1G!1G to solve the problem in your
>> > email.
>> >
>> > Thanks,
>> > Chao Fan
>> >
>> > On Mon, Jan 15, 2018 at 08:40:16PM +0800, Chao Fan wrote:
>> > >[...]
>> >
>> >
>>
>
>
>