2019-03-14 03:22:24

by pierre kuo

[permalink] [raw]
Subject: [PATCH v2 1/1] initrd: move initrd_start calculate within linear mapping range check

Previously, initrd_start and initrd_end could be successfully
calculated even when (base < memblock_start_of_DRAM()) or (base + size >
memblock_start_of_DRAM() + linear_region_size).

That means that even when the linear mapping range check fails, the
initrd still gets virtual addresses assigned. Calculate
initrd_start/initrd_end only when the linear mapping check passes.

Fixes: c756c592e442 ("arm64: Utilize phys_initrd_start/phys_initrd_size")
Reviewed-by: Steven Price <[email protected]>
Signed-off-by: pierre Kuo <[email protected]>
---
Changes in v2:
- add Fixes tag

arch/arm64/mm/init.c | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 7205a9085b4d..1adf418de685 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -425,6 +425,9 @@ void __init arm64_memblock_init(void)
memblock_remove(base, size); /* clear MEMBLOCK_ flags */
memblock_add(base, size);
memblock_reserve(base, size);
+ /* the generic initrd code expects virtual addresses */
+ initrd_start = __phys_to_virt(phys_initrd_start);
+ initrd_end = initrd_start + phys_initrd_size;
}
}

@@ -450,11 +453,6 @@ void __init arm64_memblock_init(void)
* pagetables with memblock.
*/
memblock_reserve(__pa_symbol(_text), _end - _text);
- if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {
- /* the generic initrd code expects virtual addresses */
- initrd_start = __phys_to_virt(phys_initrd_start);
- initrd_end = initrd_start + phys_initrd_size;
- }

early_init_fdt_scan_reserved_mem();

--
2.17.1



2019-03-18 03:08:08

by pierre kuo

[permalink] [raw]
Subject: Re: [PATCH v2 1/1] initrd: move initrd_start calculate within linear mapping range check

Hi Steven, Catalin and Will:
>
> Previously, initrd_start and initrd_end could be successfully
> calculated even when (base < memblock_start_of_DRAM()) or (base + size >
> memblock_start_of_DRAM() + linear_region_size).
>
> That means that even when the linear mapping range check fails, the
> initrd still gets virtual addresses assigned. Calculate
> initrd_start/initrd_end only when the linear mapping check passes.
>
> Fixes: c756c592e442 ("arm64: Utilize phys_initrd_start/phys_initrd_size")
> Reviewed-by: Steven Price <[email protected]>
> Signed-off-by: pierre Kuo <[email protected]>
> ---
> Changes in v2:
> - add Fixes tag
>

Would you mind giving some comments and suggestions on this v2 patch?
If there is anything I have not noticed, please let me know.

I sincerely appreciate your kind help.

2019-03-19 15:32:20

by Catalin Marinas

[permalink] [raw]
Subject: Re: [PATCH v2 1/1] initrd: move initrd_start calculate within linear mapping range check

On Thu, Mar 14, 2019 at 11:20:47AM +0800, pierre Kuo wrote:
> Previously, initrd_start and initrd_end could be successfully
> calculated even when (base < memblock_start_of_DRAM()) or (base + size >
> memblock_start_of_DRAM() + linear_region_size).
>
> That means that even when the linear mapping range check fails, the
> initrd still gets virtual addresses assigned. Calculate
> initrd_start/initrd_end only when the linear mapping check passes.
>
> Fixes: c756c592e442 ("arm64: Utilize phys_initrd_start/phys_initrd_size")

For future versions, please also cc the author of the original commit
you are fixing.

> Reviewed-by: Steven Price <[email protected]>
> Signed-off-by: pierre Kuo <[email protected]>
> ---
> Changes in v2:
> - add Fixes tag
>
> arch/arm64/mm/init.c | 8 +++-----
> 1 file changed, 3 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 7205a9085b4d..1adf418de685 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -425,6 +425,9 @@ void __init arm64_memblock_init(void)
> memblock_remove(base, size); /* clear MEMBLOCK_ flags */
> memblock_add(base, size);
> memblock_reserve(base, size);
> + /* the generic initrd code expects virtual addresses */
> + initrd_start = __phys_to_virt(phys_initrd_start);
> + initrd_end = initrd_start + phys_initrd_size;
> }
> }
>
> @@ -450,11 +453,6 @@ void __init arm64_memblock_init(void)
> * pagetables with memblock.
> */
> memblock_reserve(__pa_symbol(_text), _end - _text);
> - if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {
> - /* the generic initrd code expects virtual addresses */
> - initrd_start = __phys_to_virt(phys_initrd_start);
> - initrd_end = initrd_start + phys_initrd_size;
> - }

With CONFIG_RANDOMIZE_BASE we can get a further change to memstart_addr
after the place where you moved the initrd_{start,end} setting, which
would result in a different value for __phys_to_virt(phys_initrd_start).

--
Catalin

2019-03-31 15:15:55

by pierre kuo

[permalink] [raw]
Subject: Re: [PATCH v2 1/1] initrd: move initrd_start calculate within linear mapping range check

Hi Catalin:

> On Thu, Mar 14, 2019 at 11:20:47AM +0800, pierre Kuo wrote:
> > Previously, initrd_start and initrd_end could be successfully
> > calculated even when (base < memblock_start_of_DRAM()) or (base + size >
> > memblock_start_of_DRAM() + linear_region_size).
> >
> > That means that even when the linear mapping range check fails, the
> > initrd still gets virtual addresses assigned. Calculate
> > initrd_start/initrd_end only when the linear mapping check passes.
> >
> > Fixes: c756c592e442 ("arm64: Utilize phys_initrd_start/phys_initrd_size")
>
> For future versions, please also cc the author of the original commit
> you are fixing.

Got it, and thanks for your kind reminder ^^

> >
> > arch/arm64/mm/init.c | 8 +++-----
> > 1 file changed, 3 insertions(+), 5 deletions(-)
> >
> > diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> > index 7205a9085b4d..1adf418de685 100644
> > --- a/arch/arm64/mm/init.c
> > +++ b/arch/arm64/mm/init.c
> > @@ -425,6 +425,9 @@ void __init arm64_memblock_init(void)
> > memblock_remove(base, size); /* clear MEMBLOCK_ flags */
> > memblock_add(base, size);
> > memblock_reserve(base, size);
> > + /* the generic initrd code expects virtual addresses */
> > + initrd_start = __phys_to_virt(phys_initrd_start);
> > + initrd_end = initrd_start + phys_initrd_size;
> > }
> > }
> >
> > @@ -450,11 +453,6 @@ void __init arm64_memblock_init(void)
> > * pagetables with memblock.
> > */
> > memblock_reserve(__pa_symbol(_text), _end - _text);
> > - if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {
> > - /* the generic initrd code expects virtual addresses */
> > - initrd_start = __phys_to_virt(phys_initrd_start);
> > - initrd_end = initrd_start + phys_initrd_size;
> > - }
>
> With CONFIG_RANDOMIZE_BASE we can get a further change to memstart_addr
> after the place where you moved the initrd_{start,end} setting, which
> would result in a different value for __phys_to_virt(phys_initrd_start).

I see what you mean, since __phys_to_virt uses PHYS_OFFSET
(memstart_addr) in its calculation.
How about moving the CONFIG_RANDOMIZE_BASE part of the code ahead of
the CONFIG_BLK_DEV_INITRD check?

That means moving (d) below ahead of (c):
previous:
if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {} ---(a)
if (memory_limit != PHYS_ADDR_MAX) {} ---(b)
if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {} ---(c)
if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {} ---(d)

now:
if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {} ---(a)
if (memory_limit != PHYS_ADDR_MAX) {} ----------------(b)
if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {} --------------(d)
if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {} ---(c)

I appreciate your kind advice.

2019-04-01 15:01:10

by pierre kuo

[permalink] [raw]
Subject: Re: [PATCH v2 1/1] initrd: move initrd_start calculate within linear mapping range check

Hi Catalin:

> > With CONFIG_RANDOMIZE_BASE we can get a further change to memstart_addr
> > after the place where you moved the initrd_{start,end} setting, which
> > would result in a different value for __phys_to_virt(phys_initrd_start).
>
> I see what you mean, since __phys_to_virt uses PHYS_OFFSET
> (memstart_addr) in its calculation.
> How about moving the CONFIG_RANDOMIZE_BASE part of the code ahead of
> the CONFIG_BLK_DEV_INITRD check?
>
> That means moving (d) below ahead of (c):
> previous:
> if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {} ---(a)
> if (memory_limit != PHYS_ADDR_MAX) {} ---(b)
> if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {} ---(c)
> if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {} ---(d)
>
> now:
> if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {} ---(a)
> if (memory_limit != PHYS_ADDR_MAX) {} ----------------(b)
> if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {} --------------(d)
> if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {} ---(c)
>

After tracing the kernel code,
would it even be possible to move the CONFIG_RANDOMIZE_BASE part of the
code ahead of the memory_limit check?

That means the flow may look like below:
now2:
if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {} ---(a)
if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {} ---(d)
if (memory_limit != PHYS_ADDR_MAX) {} ---(b)
if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {} ---(c)

In (b), the results of __pa_symbol(_text), memory_limit,
memblock_mem_limit_remove_map and memblock_add
do not depend on memstart_addr.
So the now2 flow can group the modifications of memstart_addr, putting
(a) and (d) together.

I sincerely appreciate your comments.

2019-04-01 15:39:34

by Will Deacon

[permalink] [raw]
Subject: Re: [PATCH v2 1/1] initrd: move initrd_start calculate within linear mapping range check

[+Ard in case I'm missing something]

On Mon, Apr 01, 2019 at 10:59:53PM +0800, pierre kuo wrote:
> > > With CONFIG_RANDOMIZE_BASE we can get a further change to memstart_addr
> > > after the place where you moved the initrd_{start,end} setting, which
> > > would result in a different value for __phys_to_virt(phys_initrd_start).
> >
> > I see what you mean, since __phys_to_virt uses PHYS_OFFSET
> > (memstart_addr) in its calculation.
> > How about moving the CONFIG_RANDOMIZE_BASE part of the code ahead of
> > the CONFIG_BLK_DEV_INITRD check?
> >
> > That means moving (d) below ahead of (c):
> > previous:
> > if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {} ---(a)
> > if (memory_limit != PHYS_ADDR_MAX) {} ---(b)
> > if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {} ---(c)
> > if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {} ---(d)
> >
> > now:
> > if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {} ---(a)
> > if (memory_limit != PHYS_ADDR_MAX) {} ----------------(b)
> > if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {} --------------(d)
> > if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {} ---(c)
> >
>
> > After tracing the kernel code,
> > would it even be possible to move the CONFIG_RANDOMIZE_BASE part of the
> > code ahead of the memory_limit check?
> >
> > That means the flow may look like below:
> > now2:
> > if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {} ---(a)
> > if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {} ---(d)
> > if (memory_limit != PHYS_ADDR_MAX) {} ---(b)
> > if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {} ---(c)
> >
> > In (b), the results of __pa_symbol(_text), memory_limit,
> > memblock_mem_limit_remove_map and memblock_add
> > do not depend on memstart_addr.
> > So the now2 flow can group the modifications of memstart_addr, putting
> > (a) and (d) together.

I'm afraid that you've lost me with this. Why isn't it just as simple as
the diff below?

Will

--->8

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index c29b17b520cd..ec3487c94b10 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -377,7 +377,7 @@ void __init arm64_memblock_init(void)
base + size > memblock_start_of_DRAM() +
linear_region_size,
"initrd not fully accessible via the linear mapping -- please check your bootloader ...\n")) {
- initrd_start = 0;
+ phys_initrd_size = 0;
} else {
memblock_remove(base, size); /* clear MEMBLOCK_ flags */
memblock_add(base, size);

2019-04-03 16:46:12

by pierre kuo

[permalink] [raw]
Subject: Re: [PATCH v2 1/1] initrd: move initrd_start calculate within linear mapping range check

Hi Will:
>
> [+Ard in case I'm missing something]
>
> On Mon, Apr 01, 2019 at 10:59:53PM +0800, pierre kuo wrote:
> > [...]
> >
> > After tracing the kernel code,
> > would it even be possible to move the CONFIG_RANDOMIZE_BASE part of the
> > code ahead of the memory_limit check?
> >
> > That means the flow may look like below:
> > now2:
> > if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {} ---(a)
> > if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {} ---(d)
> > if (memory_limit != PHYS_ADDR_MAX) {} ---(b)
> > if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {} ---(c)
> >
> > In (b), the results of __pa_symbol(_text), memory_limit,
> > memblock_mem_limit_remove_map and memblock_add
> > do not depend on memstart_addr.
> > So the now2 flow can group the modifications of memstart_addr, putting
> > (a) and (d) together.
>
> I'm afraid that you've lost me with this.
Your kind suggestions are welcome ^^

> Why isn't it just as simple as
> the diff below?
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index c29b17b520cd..ec3487c94b10 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -377,7 +377,7 @@ void __init arm64_memblock_init(void)
> base + size > memblock_start_of_DRAM() +
> linear_region_size,
> "initrd not fully accessible via the linear mapping -- please check your bootloader ...\n")) {
> - initrd_start = 0;
> + phys_initrd_size = 0;
> } else {
> memblock_remove(base, size); /* clear MEMBLOCK_ flags */
> memblock_add(base, size);

This patch will also fix the issue, but it still needs two "if"
comparisons to get initrd_start/initrd_end.
By grouping the modifications of memstart_addr and calculating
initrd_start/initrd_end only when the linear mapping check passes,
maybe (just maybe) the code can be more concise.

I sincerely appreciate all of your great comments.

2019-04-03 17:25:43

by Will Deacon

[permalink] [raw]
Subject: Re: [PATCH v2 1/1] initrd: move initrd_start calculate within linear mapping range check

On Thu, Apr 04, 2019 at 12:44:25AM +0800, pierre kuo wrote:
> > On Mon, Apr 01, 2019 at 10:59:53PM +0800, pierre kuo wrote:
> > > [...]
> >
> > I'm afraid that you've lost me with this.
> > Your kind suggestions are welcome ^^
>
> > Why isn't it just as simple as
> > the diff below?
> > diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> > index c29b17b520cd..ec3487c94b10 100644
> > --- a/arch/arm64/mm/init.c
> > +++ b/arch/arm64/mm/init.c
> > @@ -377,7 +377,7 @@ void __init arm64_memblock_init(void)
> > base + size > memblock_start_of_DRAM() +
> > linear_region_size,
> > "initrd not fully accessible via the linear mapping -- please check your bootloader ...\n")) {
> > - initrd_start = 0;
> > + phys_initrd_size = 0;
> > } else {
> > memblock_remove(base, size); /* clear MEMBLOCK_ flags */
> > memblock_add(base, size);
>
> This patch will also fix the issue, but it still needs two "if"
> comparisons to get initrd_start/initrd_end.
> By grouping the modifications of memstart_addr and calculating
> initrd_start/initrd_end only when the linear mapping check passes,
> maybe (just maybe) the code can be more concise.

Maybe, but I don't think we've seen a patch which accomplishes that. I think
I'll go ahead and commit the basic one-liner, then we can always improve it
afterwards if somebody sends a patch. It's not like this is a fastpath.

Will

2019-04-03 17:28:50

by Florian Fainelli

[permalink] [raw]
Subject: Re: [PATCH v2 1/1] initrd: move initrd_start calculate within linear mapping range check

On 4/3/19 10:24 AM, Will Deacon wrote:
> On Thu, Apr 04, 2019 at 12:44:25AM +0800, pierre kuo wrote:
>>> On Mon, Apr 01, 2019 at 10:59:53PM +0800, pierre kuo wrote:
>>>> [...]
>>>
>>> I'm afraid that you've lost me with this.
>> Your kind suggestions are welcome ^^
>>
>>> Why isn't it just as simple as
>>> the diff below?
>>> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
>>> index c29b17b520cd..ec3487c94b10 100644
>>> --- a/arch/arm64/mm/init.c
>>> +++ b/arch/arm64/mm/init.c
>>> @@ -377,7 +377,7 @@ void __init arm64_memblock_init(void)
>>> base + size > memblock_start_of_DRAM() +
>>> linear_region_size,
>>> "initrd not fully accessible via the linear mapping -- please check your bootloader ...\n")) {
>>> - initrd_start = 0;
>>> + phys_initrd_size = 0;
>>> } else {
>>> memblock_remove(base, size); /* clear MEMBLOCK_ flags */
>>> memblock_add(base, size);
>>
>> This patch will also fix the issue, but it still needs two "if"
>> comparisons to get initrd_start/initrd_end.
>> By grouping the modifications of memstart_addr and calculating
>> initrd_start/initrd_end only when the linear mapping check passes,
>> maybe (just maybe) the code can be more concise.
>
> Maybe, but I don't think we've seen a patch which accomplishes that. I think
> I'll go ahead and commit the basic one-liner, then we can always improve it
> afterwards if somebody sends a patch. It's not like this is a fastpath.

Sorry for the slow response and for introducing the bug in the first place.
Yes, I agree here; a one-liner is a better way to get this fixed:

Acked-by: Florian Fainelli <[email protected]>
--
Florian

2019-04-08 18:09:43

by pierre kuo

[permalink] [raw]
Subject: Re: [PATCH v2 1/1] initrd: move initrd_start calculate within linear mapping range check

Hi Will:
>
> Maybe, but I don't think we've seen a patch which accomplishes that. I think
> I'll go ahead and commit the basic one-liner, then we can always improve it
> afterwards if somebody sends a patch. It's not like this is a fastpath.

Sorry for not showing you the patches I was trying to explain.
I will attach the v3 patches for your reference.

Thanks for your kind help,