2018-01-30 06:44:33

by Jia Zhang

Subject: [PATCH 1/2] /proc/kcore: Fix SMAP violation when dumping vsyscall user page

Commit df04abfd181a
("fs/proc/kcore.c: Add bounce buffer for ktext data") introduced a
bounce buffer to work around CONFIG_HARDENED_USERCOPY=y. However,
accessing the vsyscall user page in this way causes an SMAP violation.

Simply replacing memcpy() with copy_from_user() would fix this issue,
but introducing a common way to handle this sort of user page may be
useful in the future.

Currently, only the vsyscall page requires KCORE_USER.

Signed-off-by: Jia Zhang <[email protected]>
---
 arch/x86/mm/init_64.c | 2 +-
 fs/proc/kcore.c       | 4 ++++
 include/linux/kcore.h | 1 +
 3 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 4a83728..dab78f6 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1187,7 +1187,7 @@ void __init mem_init(void)
 
 	/* Register memory areas for /proc/kcore */
 	kclist_add(&kcore_vsyscall, (void *)VSYSCALL_ADDR,
-		   PAGE_SIZE, KCORE_OTHER);
+		   PAGE_SIZE, KCORE_USER);
 
 	mem_init_print_info(NULL);
 }
diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
index 4bc85cb..e4b0204 100644
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -510,6 +510,10 @@ static void elf_kcore_store_hdr(char *bufp, int nphdr, int dataoff)
 			/* we have to zero-fill user buffer even if no read */
 			if (copy_to_user(buffer, buf, tsz))
 				return -EFAULT;
+		} else if (m->type == KCORE_USER) {
+			/* user page is handled prior to normal kernel page */
+			if (copy_to_user(buffer, (char *)start, tsz))
+				return -EFAULT;
 		} else {
 			if (kern_addr_valid(start)) {
 				unsigned long n;
diff --git a/include/linux/kcore.h b/include/linux/kcore.h
index 7ff25a8..80db19d 100644
--- a/include/linux/kcore.h
+++ b/include/linux/kcore.h
@@ -10,6 +10,7 @@ enum kcore_type {
 	KCORE_VMALLOC,
 	KCORE_RAM,
 	KCORE_VMEMMAP,
+	KCORE_USER,
 	KCORE_OTHER,
 };

--
1.8.3.1
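
For context: the uaccess helpers such as copy_to_user() bracket their
copies with STAC/CLAC, which temporarily lifts SMAP, so they may touch
user-mapped pages like the vsyscall page; a bare memcpy() in kernel
mode faults instead. The effect of the patch can be exercised from
userspace with a minimal sketch like the following (not part of the
patchset; it assumes an x86_64 kernel with the vsyscall page enabled
and read access to /proc/kcore, i.e. root):

#define _XOPEN_SOURCE 700
#include <elf.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define VSYSCALL_ADDR 0xffffffffff600000ULL

int main(void)
{
	Elf64_Ehdr eh;
	Elf64_Phdr ph;
	char page[4096];
	int i, fd = open("/proc/kcore", O_RDONLY);

	/* /proc/kcore is an ELF core image; read its header. */
	if (fd < 0 || pread(fd, &eh, sizeof(eh), 0) != sizeof(eh)) {
		perror("/proc/kcore");
		return 1;
	}
	/* Walk the program headers looking for the vsyscall segment. */
	for (i = 0; i < eh.e_phnum; i++) {
		if (pread(fd, &ph, sizeof(ph),
			  eh.e_phoff + (off_t)i * eh.e_phentsize) != sizeof(ph))
			break;
		if (ph.p_type != PT_LOAD || VSYSCALL_ADDR < ph.p_vaddr ||
		    VSYSCALL_ADDR >= ph.p_vaddr + ph.p_memsz)
			continue;
		/* This read used to trip SMAP before the fix. */
		printf("read %zd bytes of the vsyscall page\n",
		       pread(fd, page, sizeof(page),
			     ph.p_offset + (VSYSCALL_ADDR - ph.p_vaddr)));
		return 0;
	}
	printf("no vsyscall segment in /proc/kcore\n");
	return 0;
}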



2018-01-30 06:44:06

by Jia Zhang

Subject: [PATCH 2/2] x86/mm/64: Add vsyscall page to /proc/kcore conditionally

When dumping /proc/kcore, the vsyscall page should be visible only if
the kernel was booted with vsyscall=emulate or vsyscall=native.

Signed-off-by: Jia Zhang <[email protected]>
---
 arch/x86/mm/init_64.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index dab78f6..3d4cf33 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1186,8 +1186,9 @@ void __init mem_init(void)
 	register_page_bootmem_info();
 
 	/* Register memory areas for /proc/kcore */
-	kclist_add(&kcore_vsyscall, (void *)VSYSCALL_ADDR,
-		   PAGE_SIZE, KCORE_USER);
+	if (get_gate_vma(&init_mm))
+		kclist_add(&kcore_vsyscall, (void *)VSYSCALL_ADDR,
+			   PAGE_SIZE, KCORE_USER);
 
 	mem_init_print_info(NULL);
 }
--
1.8.3.1
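
For context: on x86_64, get_gate_vma() returns the gate VMA covering
VSYSCALL_ADDR only when the vsyscall page exists; when booted with
vsyscall=none it returns NULL, so the kclist_add() above is skipped and
/proc/kcore carries no vsyscall segment. Roughly (a paraphrase of the
logic in arch/x86/entry/vsyscall/vsyscall_64.c; the exact code differs):

static enum { EMULATE, NATIVE, NONE } vsyscall_mode = EMULATE;

/* Single VMA covering the vsyscall page at VSYSCALL_ADDR. */
static struct vm_area_struct gate_vma;

struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
{
	if (vsyscall_mode == NONE)	/* booted with vsyscall=none */
		return NULL;
	return &gate_vma;
}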


2018-02-01 01:03:40

by Jia Zhang

Subject: Re: [PATCH 1/2] /proc/kcore: Fix SMAP violation when dumping vsyscall user page

Hi,

Are there any comments here?

Thanks,
Jia

On 2018/1/30 2:42 PM, Jia Zhang wrote:
> Commit df04abfd181a
> ("fs/proc/kcore.c: Add bounce buffer for ktext data") introduced a
> bounce buffer to work around CONFIG_HARDENED_USERCOPY=y. However,
> accessing the vsyscall user page in this way causes an SMAP violation.
>
> [...]

2018-02-05 01:34:33

by Jia Zhang

Subject: Re: [PATCH 1/2] /proc/kcore: Fix SMAP violation when dumping vsyscall user page

Hi Jiri,

The maintainers seem too busy to review this patchset. As the author
of commit df04abfd181a, could you please help review it?

Thanks,
Jia

On 2018/1/30 2:42 PM, Jia Zhang wrote:
> Commit df04abfd181a
> ("fs/proc/kcore.c: Add bounce buffer for ktext data") introduced a
> bounce buffer to work around CONFIG_HARDENED_USERCOPY=y. However,
> accessing the vsyscall user page in this way causes an SMAP violation.
>
> [...]

2018-02-05 09:28:02

by Jiri Olsa

Subject: Re: [PATCH 2/2] x86/mm/64: Add vsyscall page to /proc/kcore conditionally

On Tue, Jan 30, 2018 at 02:42:59PM +0800, Jia Zhang wrote:
> When dumping /proc/kcore, the vsyscall page should be visible only if
> the kernel was booted with vsyscall=emulate or vsyscall=native.
>
> Signed-off-by: Jia Zhang <[email protected]>
> ---
>  arch/x86/mm/init_64.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> index dab78f6..3d4cf33 100644
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -1186,8 +1186,9 @@ void __init mem_init(void)
>  	register_page_bootmem_info();
>  
>  	/* Register memory areas for /proc/kcore */
> -	kclist_add(&kcore_vsyscall, (void *)VSYSCALL_ADDR,
> -		   PAGE_SIZE, KCORE_USER);
> +	if (get_gate_vma(&init_mm))
> +		kclist_add(&kcore_vsyscall, (void *)VSYSCALL_ADDR,
> +			   PAGE_SIZE, KCORE_USER);

nit: we use { } when there's more than one line of code

anyway the approach looks ok to me

Reviewed-by: Jiri Olsa <[email protected]>

thanks,
jirka
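
For reference, the braced form the nit asks for would look like this
(behavior unchanged, style only):

	/* Register memory areas for /proc/kcore */
	if (get_gate_vma(&init_mm)) {
		kclist_add(&kcore_vsyscall, (void *)VSYSCALL_ADDR,
			   PAGE_SIZE, KCORE_USER);
	}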

2018-02-09 01:08:29

by Jia Zhang

Subject: Re: [PATCH 2/2] x86/mm/64: Add vsyscall page to /proc/kcore conditionally

Hi,

Is there anybody else who can give this review some attention?

Thanks,
Jia

On 2018/2/5 5:26 PM, Jiri Olsa wrote:
> On Tue, Jan 30, 2018 at 02:42:59PM +0800, Jia Zhang wrote:
>> When dumping /proc/kcore, the vsyscall page should be visible only if
>> the kernel was booted with vsyscall=emulate or vsyscall=native.
>>
>> [...]
>
> nit: we use { } when there's more than one line of code
>
> anyway the approach looks ok to me
>
> Reviewed-by: Jiri Olsa <[email protected]>
>
> thanks,
> jirka
>

2018-02-12 12:29:45

by Thomas Gleixner

Subject: Re: [PATCH 2/2] x86/mm/64: Add vsyscall page to /proc/kcore conditionally

On Fri, 9 Feb 2018, Jia Zhang wrote:
>
> Is there anybody else who can give this review some attention?

Jiri gave you perfectly valid feedback. Please address that and repost a V2.

Thanks,

tglx