Hi,
This series restructures the external interfaces as well as the internal
code in fs/proc/vmcore.c by removing the interchangeable use of user and
kernel pointers. The unnecessary conversions between the two get in the
way of tools such as sparse producing meaningful results. Keeping user
and kernel pointers separate throughout their propagation also simplifies
the code.
The external interfaces copy_oldmem_page(), copy_oldmem_page_encrypted()
and read_from_oldmem() are used across different architectures. The goal
here is to update one architecture at a time, so a final cleanup patch
removes the temporary intermediaries at the end.
In this series, an extra user pointer parameter is added to all of the
above functions rather than a union, since earlier ideas of using a
universal pointer met with disagreement [1, 2]. This series is posted as
an RFC in order to find an acceptable way of handling this use case.
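To illustrate, the shape of the change for the core helpers (as
implemented by the patches in this series) is roughly:

  /* Before: one pointer plus an int flag selecting the address space. */
  ssize_t read_from_oldmem(char *buf, size_t count, u64 *ppos,
                           int userbuf, bool encrypted);
  ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
                           unsigned long offset, int userbuf);

  /* After: separate __user and kernel pointers; exactly one is non-NULL. */
  ssize_t read_from_oldmem(char __user *ubuf, char *kbuf, size_t count,
                           u64 *ppos, bool encrypted);
  ssize_t copy_oldmem_page_buf(unsigned long pfn, char __user *ubuf,
                               char *kbuf, size_t csize,
                               unsigned long offset);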
This series is based on v5.16-rc3, compile tested for the modified
architectures, and boot tested in qemu for all architectures except ia64.
Note: this series breaks crash dump functionality for each architecture
after patch 3; functionality is restored once that architecture
implements its own copy_oldmem_page_buf() interface.
Thanks,
Amit Daniel
[1]: https://lore.kernel.org/lkml/[email protected]/
[2]: https://lore.kernel.org/lkml/CAHk-=wit9enePELG=-HnLsr0nY5bucFNjqAqWoFTuYDGR1P4KA@mail.gmail.com/
Amit Daniel Kachhap (14):
fs/proc/vmcore: Update read_from_oldmem() for user pointer
fs/proc/vmcore: Update copy_oldmem_page_encrypted() for user buffer
fs/proc/vmcore: Update copy_oldmem_page() for user buffer
x86/crash_dump_64: Use the new interface copy_oldmem_page_buf
x86/crash_dump_32: Use the new interface copy_oldmem_page_buf
arm64/crash_dump: Use the new interface copy_oldmem_page_buf
arm/crash_dump: Use the new interface copy_oldmem_page_buf
mips/crash_dump: Use the new interface copy_oldmem_page_buf
sh/crash_dump: Use the new interface copy_oldmem_page_buf
riscv/crash_dump: Use the new interface copy_oldmem_page_buf
powerpc/crash_dump: Use the new interface copy_oldmem_page_buf
ia64/crash_dump: Use the new interface copy_oldmem_page_buf
s390/crash_dump: Use the new interface copy_oldmem_page_buf
fs/proc/vmcore: Remove the unused old interface copy_oldmem_page
arch/arm/kernel/crash_dump.c | 21 ++++----
arch/arm64/kernel/crash_dump.c | 21 ++++----
arch/ia64/kernel/crash_dump.c | 25 +++++-----
arch/mips/kernel/crash_dump.c | 24 ++++-----
arch/powerpc/kernel/crash_dump.c | 33 +++++++------
arch/riscv/kernel/crash_dump.c | 25 +++++-----
arch/s390/kernel/crash_dump.c | 12 ++---
arch/sh/kernel/crash_dump.c | 25 +++++-----
arch/x86/kernel/crash_dump_32.c | 24 ++++-----
arch/x86/kernel/crash_dump_64.c | 48 +++++++++---------
fs/proc/vmcore.c | 85 +++++++++++++++++---------------
include/linux/crash_dump.h | 23 +++++----
12 files changed, 189 insertions(+), 177 deletions(-)
--
2.17.1
The exported interface read_from_oldmem() passes a user pointer without
the __user annotation and performs unnecessary user/kernel pointer
conversions while propagating it.
Hence it is modified to take a separate parameter for the user pointer.
A helper macro, read_from_oldmem_to_kernel(), is also added so that call
sites copying into a kernel buffer remain clearly readable.
The internal functions read_vmcore(), __read_vmcore(), copy_to() and
vmcoredd_copy_dumps() are restructured accordingly to avoid the
unnecessary user/kernel pointer conversions.
The x86_64 crash dump code is also modified to use the new interface.
No functionality change intended.
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Young <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Vivek Goyal <[email protected]>
Cc: x86 <[email protected]>
Cc: kexec <[email protected]>
Cc: linux-fsdevel <[email protected]>
Signed-off-by: Amit Daniel Kachhap <[email protected]>
---
arch/x86/kernel/crash_dump_64.c | 4 +-
fs/proc/vmcore.c | 88 +++++++++++++++++++--------------
include/linux/crash_dump.h | 12 +++--
3 files changed, 60 insertions(+), 44 deletions(-)
diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
index a7f617a3981d..6433513ef43a 100644
--- a/arch/x86/kernel/crash_dump_64.c
+++ b/arch/x86/kernel/crash_dump_64.c
@@ -74,6 +74,6 @@ ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
{
- return read_from_oldmem(buf, count, ppos, 0,
- cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT));
+ return read_from_oldmem_to_kernel(buf, count, ppos,
+ cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT));
}
diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index 509f85148fee..39b4353bd37c 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -70,6 +70,14 @@ static bool vmcore_cb_unstable;
/* Whether the vmcore has been opened once. */
static bool vmcore_opened;
+#define ptr_add(utarget, ktarget, size) \
+({ \
+ if (utarget) \
+ utarget += size; \
+ else \
+ ktarget += size; \
+})
+
void register_vmcore_cb(struct vmcore_cb *cb)
{
down_write(&vmcore_cb_rwsem);
@@ -132,9 +140,8 @@ static int open_vmcore(struct inode *inode, struct file *file)
}
/* Reads a page from the oldmem device from given offset. */
-ssize_t read_from_oldmem(char *buf, size_t count,
- u64 *ppos, int userbuf,
- bool encrypted)
+ssize_t read_from_oldmem(char __user *ubuf, char *kbuf, size_t count,
+ u64 *ppos, bool encrypted)
{
unsigned long pfn, offset;
size_t nr_bytes;
@@ -156,19 +163,27 @@ ssize_t read_from_oldmem(char *buf, size_t count,
/* If pfn is not ram, return zeros for sparse dump files */
if (!pfn_is_ram(pfn)) {
tmp = 0;
- if (!userbuf)
- memset(buf, 0, nr_bytes);
- else if (clear_user(buf, nr_bytes))
+ if (kbuf)
+ memset(kbuf, 0, nr_bytes);
+ else if (clear_user(ubuf, nr_bytes))
tmp = -EFAULT;
} else {
- if (encrypted)
- tmp = copy_oldmem_page_encrypted(pfn, buf,
+ if (encrypted && ubuf)
+ tmp = copy_oldmem_page_encrypted(pfn,
+ (__force char *)ubuf,
+ nr_bytes,
+ offset, 1);
+ else if (encrypted && kbuf)
+ tmp = copy_oldmem_page_encrypted(pfn,
+ kbuf,
nr_bytes,
- offset,
- userbuf);
+ offset, 0);
+ else if (ubuf)
+ tmp = copy_oldmem_page(pfn, (__force char *)ubuf,
+ nr_bytes, offset, 1);
else
- tmp = copy_oldmem_page(pfn, buf, nr_bytes,
- offset, userbuf);
+ tmp = copy_oldmem_page(pfn, kbuf, nr_bytes,
+ offset, 0);
}
if (tmp < 0) {
up_read(&vmcore_cb_rwsem);
@@ -177,7 +192,7 @@ ssize_t read_from_oldmem(char *buf, size_t count,
*ppos += nr_bytes;
count -= nr_bytes;
- buf += nr_bytes;
+ ptr_add(ubuf, kbuf, nr_bytes);
read += nr_bytes;
++pfn;
offset = 0;
@@ -206,7 +221,7 @@ void __weak elfcorehdr_free(unsigned long long addr)
*/
ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos)
{
- return read_from_oldmem(buf, count, ppos, 0, false);
+ return read_from_oldmem_to_kernel(buf, count, ppos, false);
}
/*
@@ -214,7 +229,7 @@ ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos)
*/
ssize_t __weak elfcorehdr_read_notes(char *buf, size_t count, u64 *ppos)
{
- return read_from_oldmem(buf, count, ppos, 0, cc_platform_has(CC_ATTR_MEM_ENCRYPT));
+ return read_from_oldmem_to_kernel(buf, count, ppos, cc_platform_has(CC_ATTR_MEM_ENCRYPT));
}
/*
@@ -239,21 +254,21 @@ copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
}
/*
- * Copy to either kernel or user space
+ * Copy to either kernel or user space buffer
*/
-static int copy_to(void *target, void *src, size_t size, int userbuf)
+static int copy_to(void __user *utarget, void *ktarget, void *src, size_t size)
{
- if (userbuf) {
- if (copy_to_user((char __user *) target, src, size))
+ if (utarget) {
+ if (copy_to_user(utarget, src, size))
return -EFAULT;
} else {
- memcpy(target, src, size);
+ memcpy(ktarget, src, size);
}
return 0;
}
#ifdef CONFIG_PROC_VMCORE_DEVICE_DUMP
-static int vmcoredd_copy_dumps(void *dst, u64 start, size_t size, int userbuf)
+static int vmcoredd_copy_dumps(void __user *udst, void *kdst, u64 start, size_t size)
{
struct vmcoredd_node *dump;
u64 offset = 0;
@@ -266,15 +281,14 @@ static int vmcoredd_copy_dumps(void *dst, u64 start, size_t size, int userbuf)
if (start < offset + dump->size) {
tsz = min(offset + (u64)dump->size - start, (u64)size);
buf = dump->buf + start - offset;
- if (copy_to(dst, buf, tsz, userbuf)) {
+ if (copy_to(udst, kdst, buf, tsz)) {
ret = -EFAULT;
goto out_unlock;
}
size -= tsz;
start += tsz;
- dst += tsz;
-
+ ptr_add(udst, kdst, tsz);
/* Leave now if buffer filled already */
if (!size)
goto out_unlock;
@@ -329,8 +343,8 @@ static int vmcoredd_mmap_dumps(struct vm_area_struct *vma, unsigned long dst,
/* Read from the ELF header and then the crash dump. On error, negative value is
* returned otherwise number of bytes read are returned.
*/
-static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos,
- int userbuf)
+static ssize_t __read_vmcore(char __user *ubuffer, char *kbuffer, size_t buflen,
+ loff_t *fpos)
{
ssize_t acc = 0, tmp;
size_t tsz;
@@ -347,11 +361,11 @@ static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos,
/* Read ELF core header */
if (*fpos < elfcorebuf_sz) {
tsz = min(elfcorebuf_sz - (size_t)*fpos, buflen);
- if (copy_to(buffer, elfcorebuf + *fpos, tsz, userbuf))
+ if (copy_to(ubuffer, kbuffer, elfcorebuf + *fpos, tsz))
return -EFAULT;
buflen -= tsz;
*fpos += tsz;
- buffer += tsz;
+ ptr_add(ubuffer, kbuffer, tsz);
acc += tsz;
/* leave now if filled buffer already */
@@ -378,12 +392,12 @@ static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos,
tsz = min(elfcorebuf_sz + vmcoredd_orig_sz -
(size_t)*fpos, buflen);
start = *fpos - elfcorebuf_sz;
- if (vmcoredd_copy_dumps(buffer, start, tsz, userbuf))
+ if (vmcoredd_copy_dumps(ubuffer, kbuffer, start, tsz))
return -EFAULT;
buflen -= tsz;
*fpos += tsz;
- buffer += tsz;
+ ptr_add(ubuffer, kbuffer, tsz);
acc += tsz;
/* leave now if filled buffer already */
@@ -395,12 +409,12 @@ static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos,
/* Read remaining elf notes */
tsz = min(elfcorebuf_sz + elfnotes_sz - (size_t)*fpos, buflen);
kaddr = elfnotes_buf + *fpos - elfcorebuf_sz - vmcoredd_orig_sz;
- if (copy_to(buffer, kaddr, tsz, userbuf))
+ if (copy_to(ubuffer, kbuffer, kaddr, tsz))
return -EFAULT;
buflen -= tsz;
*fpos += tsz;
- buffer += tsz;
+ ptr_add(ubuffer, kbuffer, tsz);
acc += tsz;
/* leave now if filled buffer already */
@@ -414,13 +428,13 @@ static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos,
m->offset + m->size - *fpos,
buflen);
start = m->paddr + *fpos - m->offset;
- tmp = read_from_oldmem(buffer, tsz, &start,
- userbuf, cc_platform_has(CC_ATTR_MEM_ENCRYPT));
+ tmp = read_from_oldmem(ubuffer, kbuffer, tsz, &start,
+ cc_platform_has(CC_ATTR_MEM_ENCRYPT));
if (tmp < 0)
return tmp;
buflen -= tsz;
*fpos += tsz;
- buffer += tsz;
+ ptr_add(ubuffer, kbuffer, tsz);
acc += tsz;
/* leave now if filled buffer already */
@@ -435,7 +449,7 @@ static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos,
static ssize_t read_vmcore(struct file *file, char __user *buffer,
size_t buflen, loff_t *fpos)
{
- return __read_vmcore((__force char *) buffer, buflen, fpos, 1);
+ return __read_vmcore(buffer, NULL, buflen, fpos);
}
/*
@@ -461,7 +475,7 @@ static vm_fault_t mmap_vmcore_fault(struct vm_fault *vmf)
if (!PageUptodate(page)) {
offset = (loff_t) index << PAGE_SHIFT;
buf = __va((page_to_pfn(page) << PAGE_SHIFT));
- rc = __read_vmcore(buf, PAGE_SIZE, &offset, 0);
+ rc = __read_vmcore(NULL, buf, PAGE_SIZE, &offset);
if (rc < 0) {
unlock_page(page);
put_page(page);
diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h
index 620821549b23..eb0ed423ccc8 100644
--- a/include/linux/crash_dump.h
+++ b/include/linux/crash_dump.h
@@ -135,16 +135,18 @@ static inline int vmcore_add_device_dump(struct vmcoredd_data *data)
#endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */
#ifdef CONFIG_PROC_VMCORE
-ssize_t read_from_oldmem(char *buf, size_t count,
- u64 *ppos, int userbuf,
- bool encrypted);
+ssize_t read_from_oldmem(char __user *ubuf, char *kbuf, size_t count,
+ u64 *ppos, bool encrypted);
#else
-static inline ssize_t read_from_oldmem(char *buf, size_t count,
- u64 *ppos, int userbuf,
+static inline ssize_t read_from_oldmem(char __user *ubuf, char *kbuf,
+ size_t count, u64 *ppos,
bool encrypted)
{
return -EOPNOTSUPP;
}
#endif /* CONFIG_PROC_VMCORE */
+#define read_from_oldmem_to_kernel(buf, count, ppos, encrypted) \
+ read_from_oldmem(NULL, buf, count, ppos, encrypted)
+
#endif /* LINUX_CRASHDUMP_H */
--
2.17.1
The exported interface copy_oldmem_page_encrypted() passes a user
pointer without the __user annotation and performs unnecessary
user/kernel pointer conversions while propagating it.
Hence it is modified to take a separate parameter for the user pointer.
The similar interface copy_oldmem_page() will be updated in subsequent
patches.
The x86_64 crash dump code is also modified to use the updated interface.
No functionality change intended.
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Young <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Vivek Goyal <[email protected]>
Cc: x86 <[email protected]>
Cc: kexec <[email protected]>
Cc: linux-fsdevel <[email protected]>
Signed-off-by: Amit Daniel Kachhap <[email protected]>
---
arch/x86/kernel/crash_dump_64.c | 12 +++++++++---
fs/proc/vmcore.c | 24 +++++++++++-------------
include/linux/crash_dump.h | 6 +++---
3 files changed, 23 insertions(+), 19 deletions(-)
diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
index 6433513ef43a..99cd505628fa 100644
--- a/arch/x86/kernel/crash_dump_64.c
+++ b/arch/x86/kernel/crash_dump_64.c
@@ -66,10 +66,16 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
* memory with the encryption mask set to accommodate kdump on SME-enabled
* machines.
*/
-ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
- unsigned long offset, int userbuf)
+ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char __user *ubuf,
+ char *kbuf, size_t csize,
+ unsigned long offset)
{
- return __copy_oldmem_page(pfn, buf, csize, offset, userbuf, true);
+ if (ubuf)
+ return __copy_oldmem_page(pfn, (__force char *)ubuf, csize,
+ offset, 1, true);
+ else
+ return __copy_oldmem_page(pfn, kbuf, csize,
+ offset, 0, true);
}
ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index 39b4353bd37c..fa4492ef6124 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -168,16 +168,10 @@ ssize_t read_from_oldmem(char __user *ubuf, char *kbuf, size_t count,
else if (clear_user(ubuf, nr_bytes))
tmp = -EFAULT;
} else {
- if (encrypted && ubuf)
- tmp = copy_oldmem_page_encrypted(pfn,
- (__force char *)ubuf,
- nr_bytes,
- offset, 1);
- else if (encrypted && kbuf)
- tmp = copy_oldmem_page_encrypted(pfn,
- kbuf,
- nr_bytes,
- offset, 0);
+ if (encrypted)
+ tmp = copy_oldmem_page_encrypted(pfn, ubuf,
+ kbuf, nr_bytes,
+ offset);
else if (ubuf)
tmp = copy_oldmem_page(pfn, (__force char *)ubuf,
nr_bytes, offset, 1);
@@ -247,10 +241,14 @@ int __weak remap_oldmem_pfn_range(struct vm_area_struct *vma,
* Architectures which support memory encryption override this.
*/
ssize_t __weak
-copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
- unsigned long offset, int userbuf)
+copy_oldmem_page_encrypted(unsigned long pfn, char __user *ubuf, char *kbuf,
+ size_t csize, unsigned long offset)
{
- return copy_oldmem_page(pfn, buf, csize, offset, userbuf);
+ if (ubuf)
+ return copy_oldmem_page(pfn, (__force char *)ubuf, csize,
+ offset, 1);
+ else
+ return copy_oldmem_page(pfn, kbuf, csize, offset, 0);
}
/*
diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h
index eb0ed423ccc8..36a7f08f4ad2 100644
--- a/include/linux/crash_dump.h
+++ b/include/linux/crash_dump.h
@@ -26,9 +26,9 @@ extern int remap_oldmem_pfn_range(struct vm_area_struct *vma,
extern ssize_t copy_oldmem_page(unsigned long, char *, size_t,
unsigned long, int);
-extern ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf,
- size_t csize, unsigned long offset,
- int userbuf);
+extern ssize_t copy_oldmem_page_encrypted(unsigned long pfn,
+ char __user *ubuf, char *kbuf,
+ size_t csize, unsigned long offset);
void vmcore_cleanup(void);
--
2.17.1
The exported interface copy_oldmem_page() passes a user pointer without
the __user annotation and performs unnecessary user/kernel pointer
conversions while propagating it.
Hence a separate parameter for the user pointer is needed. However,
instead of changing copy_oldmem_page() directly, a new interface
copy_oldmem_page_buf() is added, since copy_oldmem_page() is used across
different architectures. The old interface will be removed once all
architectures have been converted to the new one.
Weak implementations of both copy_oldmem_page() and copy_oldmem_page_buf()
are added temporarily to keep the kernel building across the subsequent
patches. As a consequence, crash dump is temporarily broken for each
architecture until the patch that implements its own
copy_oldmem_page_buf().
Cc: Dave Young <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Vivek Goyal <[email protected]>
Cc: kexec <[email protected]>
Cc: linux-fsdevel <[email protected]>
Signed-off-by: Amit Daniel Kachhap <[email protected]>
---
fs/proc/vmcore.c | 27 +++++++++++++++++----------
include/linux/crash_dump.h | 3 +++
2 files changed, 20 insertions(+), 10 deletions(-)
diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index fa4492ef6124..d01b85c043dd 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -172,12 +172,9 @@ ssize_t read_from_oldmem(char __user *ubuf, char *kbuf, size_t count,
tmp = copy_oldmem_page_encrypted(pfn, ubuf,
kbuf, nr_bytes,
offset);
- else if (ubuf)
- tmp = copy_oldmem_page(pfn, (__force char *)ubuf,
- nr_bytes, offset, 1);
else
- tmp = copy_oldmem_page(pfn, kbuf, nr_bytes,
- offset, 0);
+ tmp = copy_oldmem_page_buf(pfn, ubuf, kbuf,
+ nr_bytes, offset);
}
if (tmp < 0) {
up_read(&vmcore_cb_rwsem);
@@ -244,11 +241,21 @@ ssize_t __weak
copy_oldmem_page_encrypted(unsigned long pfn, char __user *ubuf, char *kbuf,
size_t csize, unsigned long offset)
{
- if (ubuf)
- return copy_oldmem_page(pfn, (__force char *)ubuf, csize,
- offset, 1);
- else
- return copy_oldmem_page(pfn, kbuf, csize, offset, 0);
+ return copy_oldmem_page_buf(pfn, ubuf, kbuf, csize, offset);
+}
+
+ssize_t __weak
+copy_oldmem_page_buf(unsigned long pfn, char __user *ubuf, char *kbuf,
+ size_t csize, unsigned long offset)
+{
+ return -EOPNOTSUPP;
+}
+
+ssize_t __weak
+copy_oldmem_page(unsigned long pfn, char *ubuf, size_t csize,
+ unsigned long offset, int userbuf)
+{
+ return -EOPNOTSUPP;
}
/*
diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h
index 36a7f08f4ad2..725c4e053ecf 100644
--- a/include/linux/crash_dump.h
+++ b/include/linux/crash_dump.h
@@ -26,6 +26,9 @@ extern int remap_oldmem_pfn_range(struct vm_area_struct *vma,
extern ssize_t copy_oldmem_page(unsigned long, char *, size_t,
unsigned long, int);
+extern ssize_t copy_oldmem_page_buf(unsigned long pfn, char __user *ubuf,
+ char *kbuf, size_t csize,
+ unsigned long offset);
extern ssize_t copy_oldmem_page_encrypted(unsigned long pfn,
char __user *ubuf, char *kbuf,
size_t csize, unsigned long offset);
--
2.17.1
The current interface copy_oldmem_page() passes a user pointer without
the __user annotation and hence performs unnecessary user/kernel pointer
conversions in its implementation.
Implement the interface copy_oldmem_page_buf() to avoid this issue.
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: x86 <[email protected]>
Signed-off-by: Amit Daniel Kachhap <[email protected]>
---
arch/x86/kernel/crash_dump_64.c | 44 +++++++++++++++------------------
1 file changed, 20 insertions(+), 24 deletions(-)
diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
index 99cd505628fa..7a6fa797260f 100644
--- a/arch/x86/kernel/crash_dump_64.c
+++ b/arch/x86/kernel/crash_dump_64.c
@@ -12,9 +12,9 @@
#include <linux/io.h>
#include <linux/cc_platform.h>
-static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
- unsigned long offset, int userbuf,
- bool encrypted)
+static ssize_t __copy_oldmem_page(unsigned long pfn, char __user *ubuf,
+ char *kbuf, size_t csize,
+ unsigned long offset, bool encrypted)
{
void *vaddr;
@@ -29,13 +29,13 @@ static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
if (!vaddr)
return -ENOMEM;
- if (userbuf) {
- if (copy_to_user((void __user *)buf, vaddr + offset, csize)) {
+ if (ubuf) {
+ if (copy_to_user(ubuf, vaddr + offset, csize)) {
iounmap((void __iomem *)vaddr);
return -EFAULT;
}
} else
- memcpy(buf, vaddr + offset, csize);
+ memcpy(kbuf, vaddr + offset, csize);
set_iounmap_nonlazy();
iounmap((void __iomem *)vaddr);
@@ -43,39 +43,35 @@ static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
}
/**
- * copy_oldmem_page - copy one page of memory
+ * copy_oldmem_page_buf - copy one page of memory
* @pfn: page frame number to be copied
- * @buf: target memory address for the copy; this can be in kernel address
- * space or user address space (see @userbuf)
+ * @ubuf: target user memory pointer for the copy; use copy_to_user() if this
+ * pointer is not NULL
+ * @kbuf: target kernel memory pointer for the copy; use memcpy() if this
+ * pointer is not NULL
* @csize: number of bytes to copy
* @offset: offset in bytes into the page (based on pfn) to begin the copy
- * @userbuf: if set, @buf is in user address space, use copy_to_user(),
- * otherwise @buf is in kernel address space, use memcpy().
*
- * Copy a page from the old kernel's memory. For this page, there is no pte
- * mapped in the current kernel. We stitch up a pte, similar to kmap_atomic.
+ * Copy a page from the old kernel's memory into the buffer pointed either by
+ * @ubuf or @kbuf. For this page, there is no pte mapped in the current kernel.
+ * We stitch up a pte, similar to kmap_atomic.
*/
-ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
- unsigned long offset, int userbuf)
+ssize_t copy_oldmem_page_buf(unsigned long pfn, char __user *ubuf, char *kbuf,
+ size_t csize, unsigned long offset)
{
- return __copy_oldmem_page(pfn, buf, csize, offset, userbuf, false);
+ return __copy_oldmem_page(pfn, ubuf, kbuf, csize, offset, false);
}
/**
- * copy_oldmem_page_encrypted - same as copy_oldmem_page() above but ioremap the
- * memory with the encryption mask set to accommodate kdump on SME-enabled
+ * copy_oldmem_page_encrypted - same as copy_oldmem_page_buf() above but ioremap
+ * the memory with the encryption mask set to accommodate kdump on SME-enabled
* machines.
*/
ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char __user *ubuf,
char *kbuf, size_t csize,
unsigned long offset)
{
- if (ubuf)
- return __copy_oldmem_page(pfn, (__force char *)ubuf, csize,
- offset, 1, true);
- else
- return __copy_oldmem_page(pfn, kbuf, csize,
- offset, 0, true);
+ return __copy_oldmem_page(pfn, ubuf, kbuf, csize, offset, true);
}
ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
--
2.17.1
The current interface copy_oldmem_page() passes a user pointer without
the __user annotation and hence performs unnecessary user/kernel pointer
conversions in its implementation.
Implement the interface copy_oldmem_page_buf() to avoid this issue.
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: x86 <[email protected]>
Signed-off-by: Amit Daniel Kachhap <[email protected]>
---
arch/x86/kernel/crash_dump_32.c | 24 ++++++++++++------------
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kernel/crash_dump_32.c b/arch/x86/kernel/crash_dump_32.c
index 5fcac46aaf6b..9932cd62ded1 100644
--- a/arch/x86/kernel/crash_dump_32.c
+++ b/arch/x86/kernel/crash_dump_32.c
@@ -30,20 +30,20 @@ static inline bool is_crashed_pfn_valid(unsigned long pfn)
}
/**
- * copy_oldmem_page - copy one page from "oldmem"
+ * copy_oldmem_page_buf - copy one page from "oldmem"
* @pfn: page frame number to be copied
- * @buf: target memory address for the copy; this can be in kernel address
- * space or user address space (see @userbuf)
+ * @ubuf: target user memory pointer for the copy; use copy_to_user() if this
+ * pointer is not NULL
+ * @kbuf: target kernel memory pointer for the copy; use memcpy() if this
+ * pointer is not NULL
* @csize: number of bytes to copy
* @offset: offset in bytes into the page (based on pfn) to begin the copy
- * @userbuf: if set, @buf is in user address space, use copy_to_user(),
- * otherwise @buf is in kernel address space, use memcpy().
*
- * Copy a page from "oldmem". For this page, there might be no pte mapped
- * in the current kernel.
+ * Copy a page from "oldmem" into the buffer pointed by either @ubuf or @kbuf.
+ * For this page, there might be no pte mapped in the current kernel.
*/
-ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
- unsigned long offset, int userbuf)
+ssize_t copy_oldmem_page_buf(unsigned long pfn, char __user *ubuf, char *kbuf,
+ size_t csize, unsigned long offset)
{
void *vaddr;
@@ -55,10 +55,10 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
vaddr = kmap_local_pfn(pfn);
- if (!userbuf) {
- memcpy(buf, vaddr + offset, csize);
+ if (!ubuf) {
+ memcpy(kbuf, vaddr + offset, csize);
} else {
- if (copy_to_user(buf, vaddr + offset, csize))
+ if (copy_to_user(ubuf, vaddr + offset, csize))
csize = -EFAULT;
}
--
2.17.1
The current interface copy_oldmem_page() passes a user pointer without
the __user annotation and hence performs unnecessary user/kernel pointer
conversions in its implementation.
Implement the interface copy_oldmem_page_buf() to avoid this issue.
Cc: Russell King <[email protected]>
Cc: linux-arm-kernel <[email protected]>
Signed-off-by: Amit Daniel Kachhap <[email protected]>
---
arch/arm/kernel/crash_dump.c | 21 ++++++++++-----------
1 file changed, 10 insertions(+), 11 deletions(-)
diff --git a/arch/arm/kernel/crash_dump.c b/arch/arm/kernel/crash_dump.c
index 53cb92435392..bb0395ab9f98 100644
--- a/arch/arm/kernel/crash_dump.c
+++ b/arch/arm/kernel/crash_dump.c
@@ -16,20 +16,19 @@
#include <linux/io.h>
/**
- * copy_oldmem_page() - copy one page from old kernel memory
+ * copy_oldmem_page_buf() - copy one page from old kernel memory
* @pfn: page frame number to be copied
- * @buf: buffer where the copied page is placed
+ * @ubuf: user buffer where the copied page is placed
+ * @kbuf: kernel buffer where the copied page is placed
* @csize: number of bytes to copy
* @offset: offset in bytes into the page
- * @userbuf: if set, @buf is int he user address space
*
* This function copies one page from old kernel memory into buffer pointed by
- * @buf. If @buf is in userspace, set @userbuf to %1. Returns number of bytes
- * copied or negative error in case of failure.
+ * either @ubuf or @kbuf. Returns number of bytes copied or negative error in
+ * case of failure.
*/
-ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
- size_t csize, unsigned long offset,
- int userbuf)
+ssize_t copy_oldmem_page_buf(unsigned long pfn, char __user *ubuf, char *kbuf,
+ size_t csize, unsigned long offset)
{
void *vaddr;
@@ -40,13 +39,13 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
if (!vaddr)
return -ENOMEM;
- if (userbuf) {
- if (copy_to_user(buf, vaddr + offset, csize)) {
+ if (ubuf) {
+ if (copy_to_user(ubuf, vaddr + offset, csize)) {
iounmap(vaddr);
return -EFAULT;
}
} else {
- memcpy(buf, vaddr + offset, csize);
+ memcpy(kbuf, vaddr + offset, csize);
}
iounmap(vaddr);
--
2.17.1
The current interface copy_oldmem_page() passes a user pointer without
the __user annotation and hence performs unnecessary user/kernel pointer
conversions in its implementation.
Use the interface copy_oldmem_page_buf() to avoid this issue.
Cc: Thomas Bogendoerfer <[email protected]>
Cc: linux-mips <[email protected]>
Signed-off-by: Amit Daniel Kachhap <[email protected]>
---
arch/mips/kernel/crash_dump.c | 24 ++++++++++++------------
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/arch/mips/kernel/crash_dump.c b/arch/mips/kernel/crash_dump.c
index 2e50f55185a6..f2406b868a27 100644
--- a/arch/mips/kernel/crash_dump.c
+++ b/arch/mips/kernel/crash_dump.c
@@ -3,20 +3,20 @@
#include <linux/crash_dump.h>
/**
- * copy_oldmem_page - copy one page from "oldmem"
+ * copy_oldmem_page_buf - copy one page from "oldmem"
* @pfn: page frame number to be copied
- * @buf: target memory address for the copy; this can be in kernel address
- * space or user address space (see @userbuf)
+ * @ubuf: target user memory pointer for the copy; use copy_to_user() if this
+ * pointer is not NULL
+ * @kbuf: target kernel memory pointer for the copy; use memcpy() if this
+ * pointer is not NULL
* @csize: number of bytes to copy
* @offset: offset in bytes into the page (based on pfn) to begin the copy
- * @userbuf: if set, @buf is in user address space, use copy_to_user(),
- * otherwise @buf is in kernel address space, use memcpy().
*
- * Copy a page from "oldmem". For this page, there is no pte mapped
- * in the current kernel.
+ * Copy a page from "oldmem" into buffer pointed by either @ubuf or @kbuf. For
+ * this page, there is no pte mapped in the current kernel.
*/
-ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
- size_t csize, unsigned long offset, int userbuf)
+ssize_t copy_oldmem_page_buf(unsigned long pfn, char __user *ubuf, char *kbuf,
+ size_t csize, unsigned long offset)
{
void *vaddr;
@@ -25,10 +25,10 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
vaddr = kmap_local_pfn(pfn);
- if (!userbuf) {
- memcpy(buf, vaddr + offset, csize);
+ if (kbuf) {
+ memcpy(kbuf, vaddr + offset, csize);
} else {
- if (copy_to_user(buf, vaddr + offset, csize))
+ if (copy_to_user(ubuf, vaddr + offset, csize))
csize = -EFAULT;
}
--
2.17.1
The current interface copy_oldmem_page() passes a user pointer without
the __user annotation and hence performs unnecessary user/kernel pointer
conversions in its implementation.
Use the interface copy_oldmem_page_buf() to avoid this issue.
Cc: Yoshinori Sato <[email protected]>
Cc: Rich Felker <[email protected]>
CC: linux-sh <[email protected]>
Signed-off-by: Amit Daniel Kachhap <[email protected]>
---
arch/sh/kernel/crash_dump.c | 25 +++++++++++++------------
1 file changed, 13 insertions(+), 12 deletions(-)
diff --git a/arch/sh/kernel/crash_dump.c b/arch/sh/kernel/crash_dump.c
index 5b41b59698c1..a80dedf1ace5 100644
--- a/arch/sh/kernel/crash_dump.c
+++ b/arch/sh/kernel/crash_dump.c
@@ -11,20 +11,21 @@
#include <linux/uaccess.h>
/**
- * copy_oldmem_page - copy one page from "oldmem"
+ * copy_oldmem_page_buf - copy one page from "oldmem"
* @pfn: page frame number to be copied
- * @buf: target memory address for the copy; this can be in kernel address
- * space or user address space (see @userbuf)
+ * @ubuf: target user memory pointer for the copy; use copy_to_user() if this
+ * pointer is not NULL
+ * @kbuf: target kernel memory pointer for the copy; use memcpy() if this
+ * pointer is not NULL
* @csize: number of bytes to copy
* @offset: offset in bytes into the page (based on pfn) to begin the copy
- * @userbuf: if set, @buf is in user address space, use copy_to_user(),
- * otherwise @buf is in kernel address space, use memcpy().
*
- * Copy a page from "oldmem". For this page, there is no pte mapped
- * in the current kernel. We stitch up a pte, similar to kmap_atomic.
+ * Copy a page from "oldmem" into the buffer pointed by either @ubuf or @kbuf.
+ * For this page, there is no pte mapped in the current kernel. We stitch up a
+ * pte, similar to kmap_atomic.
*/
-ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
- size_t csize, unsigned long offset, int userbuf)
+ssize_t copy_oldmem_page_buf(unsigned long pfn, char __user *ubuf, char *kbuf,
+ size_t csize, unsigned long offset)
{
void __iomem *vaddr;
@@ -33,13 +34,13 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
vaddr = ioremap(pfn << PAGE_SHIFT, PAGE_SIZE);
- if (userbuf) {
- if (copy_to_user((void __user *)buf, (vaddr + offset), csize)) {
+ if (ubuf) {
+ if (copy_to_user(ubuf, (vaddr + offset), csize)) {
iounmap(vaddr);
return -EFAULT;
}
} else
- memcpy(buf, (vaddr + offset), csize);
+ memcpy(kbuf, (vaddr + offset), csize);
iounmap(vaddr);
return csize;
--
2.17.1
The current interface copy_oldmem_page() passes a user pointer without
the __user annotation and hence performs unnecessary user/kernel pointer
conversions in its implementation.
Implement the interface copy_oldmem_page_buf() to avoid this issue.
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: linux-arm-kernel <[email protected]>
Signed-off-by: Amit Daniel Kachhap <[email protected]>
---
arch/arm64/kernel/crash_dump.c | 21 ++++++++++-----------
1 file changed, 10 insertions(+), 11 deletions(-)
diff --git a/arch/arm64/kernel/crash_dump.c b/arch/arm64/kernel/crash_dump.c
index 58303a9ec32c..ec9d5c80726c 100644
--- a/arch/arm64/kernel/crash_dump.c
+++ b/arch/arm64/kernel/crash_dump.c
@@ -14,20 +14,19 @@
#include <asm/memory.h>
/**
- * copy_oldmem_page() - copy one page from old kernel memory
+ * copy_oldmem_page_buf() - copy one page from old kernel memory
* @pfn: page frame number to be copied
- * @buf: buffer where the copied page is placed
+ * @ubuf: user buffer where the copied page is placed
+ * @kbuf: kernel buffer where the copied page is placed
* @csize: number of bytes to copy
* @offset: offset in bytes into the page
- * @userbuf: if set, @buf is in a user address space
*
* This function copies one page from old kernel memory into buffer pointed by
- * @buf. If @buf is in userspace, set @userbuf to %1. Returns number of bytes
- * copied or negative error in case of failure.
+ * either @ubuf or @kbuf. Returns number of bytes copied or negative error in
+ * case of failure.
*/
-ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
- size_t csize, unsigned long offset,
- int userbuf)
+ssize_t copy_oldmem_page_buf(unsigned long pfn, char __user *ubuf, char *kbuf,
+ size_t csize, unsigned long offset)
{
void *vaddr;
@@ -38,13 +37,13 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
if (!vaddr)
return -ENOMEM;
- if (userbuf) {
- if (copy_to_user((char __user *)buf, vaddr + offset, csize)) {
+ if (ubuf) {
+ if (copy_to_user(ubuf, vaddr + offset, csize)) {
memunmap(vaddr);
return -EFAULT;
}
} else {
- memcpy(buf, vaddr + offset, csize);
+ memcpy(kbuf, vaddr + offset, csize);
}
memunmap(vaddr);
--
2.17.1
The current interface copy_oldmem_page() passes a user pointer without
the __user annotation and hence performs unnecessary user/kernel pointer
conversions in its implementation.
Use the interface copy_oldmem_page_buf() to avoid this issue.
Cc: Paul Walmsley <[email protected]>
Cc: Palmer Dabbelt <[email protected]>
Cc: Albert Ou <[email protected]>
Cc: linux-riscv <[email protected]>
Signed-off-by: Amit Daniel Kachhap <[email protected]>
---
arch/riscv/kernel/crash_dump.c | 25 +++++++++++++------------
1 file changed, 13 insertions(+), 12 deletions(-)
diff --git a/arch/riscv/kernel/crash_dump.c b/arch/riscv/kernel/crash_dump.c
index 86cc0ada5752..3a8fadcd1278 100644
--- a/arch/riscv/kernel/crash_dump.c
+++ b/arch/riscv/kernel/crash_dump.c
@@ -9,20 +9,21 @@
#include <linux/io.h>
/**
- * copy_oldmem_page() - copy one page from old kernel memory
+ * copy_oldmem_page_buf() - copy one page from old kernel memory
* @pfn: page frame number to be copied
- * @buf: buffer where the copied page is placed
+ * @ubuf: user buffer where the copied page is placed; use copy_to_user() if
+ * this pointer is not NULL
+ * @kbuf: kernel buffer where the copied page is placed; use memcpy() if this
+ * pointer is not NULL
* @csize: number of bytes to copy
* @offset: offset in bytes into the page
- * @userbuf: if set, @buf is in a user address space
*
- * This function copies one page from old kernel memory into buffer pointed by
- * @buf. If @buf is in userspace, set @userbuf to %1. Returns number of bytes
- * copied or negative error in case of failure.
+ * This function copies one page from old kernel memory into the buffer pointed
+ * by either @ubuf or @kbuf. Returns number of bytes copied or negative error in
+ * case of failure.
*/
-ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
- size_t csize, unsigned long offset,
- int userbuf)
+ssize_t copy_oldmem_page_buf(unsigned long pfn, char __user *ubuf, char *kbuf,
+ size_t csize, unsigned long offset)
{
void *vaddr;
@@ -33,13 +34,13 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
if (!vaddr)
return -ENOMEM;
- if (userbuf) {
- if (copy_to_user((char __user *)buf, vaddr + offset, csize)) {
+ if (ubuf) {
+ if (copy_to_user(ubuf, vaddr + offset, csize)) {
memunmap(vaddr);
return -EFAULT;
}
} else
- memcpy(buf, vaddr + offset, csize);
+ memcpy(kbuf, vaddr + offset, csize);
memunmap(vaddr);
return csize;
--
2.17.1
The current interface copy_oldmem_page() passes a user pointer without
the __user annotation and hence performs unnecessary user/kernel pointer
conversions in its implementation.
Use the interface copy_oldmem_page_buf() to avoid this issue.
Cc: Michael Ellerman <[email protected]>
CC: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: linuxppc <[email protected]>
Signed-off-by: Amit Daniel Kachhap <[email protected]>
---
arch/powerpc/kernel/crash_dump.c | 33 ++++++++++++++++----------------
1 file changed, 17 insertions(+), 16 deletions(-)
diff --git a/arch/powerpc/kernel/crash_dump.c b/arch/powerpc/kernel/crash_dump.c
index 5693e1c67c2b..a1c8a3a11694 100644
--- a/arch/powerpc/kernel/crash_dump.c
+++ b/arch/powerpc/kernel/crash_dump.c
@@ -68,33 +68,34 @@ void __init setup_kdump_trampoline(void)
}
#endif /* CONFIG_NONSTATIC_KERNEL */
-static size_t copy_oldmem_vaddr(void *vaddr, char *buf, size_t csize,
- unsigned long offset, int userbuf)
+static size_t copy_oldmem_vaddr(void *vaddr, char __user *ubuf, char *kbuf,
+ size_t csize, unsigned long offset)
{
- if (userbuf) {
- if (copy_to_user((char __user *)buf, (vaddr + offset), csize))
+ if (ubuf) {
+ if (copy_to_user(ubuf, (vaddr + offset), csize))
return -EFAULT;
} else
- memcpy(buf, (vaddr + offset), csize);
+ memcpy(kbuf, (vaddr + offset), csize);
return csize;
}
/**
- * copy_oldmem_page - copy one page from "oldmem"
+ * copy_oldmem_page_buf - copy one page from "oldmem"
* @pfn: page frame number to be copied
- * @buf: target memory address for the copy; this can be in kernel address
- * space or user address space (see @userbuf)
+ * @ubuf: target user memory address for the copy; use copy_to_user() if this
+ * address is present
+ * @kbuf: target kernel memory address for the copy; use memcpy() if this
+ * address is present
* @csize: number of bytes to copy
* @offset: offset in bytes into the page (based on pfn) to begin the copy
- * @userbuf: if set, @buf is in user address space, use copy_to_user(),
- * otherwise @buf is in kernel address space, use memcpy().
*
- * Copy a page from "oldmem". For this page, there is no pte mapped
- * in the current kernel. We stitch up a pte, similar to kmap_atomic.
+ * Copy a page from "oldmem" into buffer pointed by either @ubuf or @kbuf. For
+ * this page, there is no pte mapped in the current kernel. We stitch up a pte,
+ * similar to kmap_atomic.
*/
-ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
- size_t csize, unsigned long offset, int userbuf)
+ssize_t copy_oldmem_page_buf(unsigned long pfn, char __user *ubuf, char *kbuf,
+ size_t csize, unsigned long offset)
{
void *vaddr;
phys_addr_t paddr;
@@ -107,10 +108,10 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
if (memblock_is_region_memory(paddr, csize)) {
vaddr = __va(paddr);
- csize = copy_oldmem_vaddr(vaddr, buf, csize, offset, userbuf);
+ csize = copy_oldmem_vaddr(vaddr, ubuf, kbuf, csize, offset);
} else {
vaddr = ioremap_cache(paddr, PAGE_SIZE);
- csize = copy_oldmem_vaddr(vaddr, buf, csize, offset, userbuf);
+ csize = copy_oldmem_vaddr(vaddr, ubuf, kbuf, csize, offset);
iounmap(vaddr);
}
--
2.17.1
The current interface copy_oldmem_page() passes a user pointer without
the __user annotation and hence performs unnecessary user/kernel pointer
conversions in its implementation.
Use the interface copy_oldmem_page_buf() to avoid this issue.
Cc: linux-ia64 <[email protected]>
Signed-off-by: Amit Daniel Kachhap <[email protected]>
---
arch/ia64/kernel/crash_dump.c | 25 +++++++++++++------------
1 file changed, 13 insertions(+), 12 deletions(-)
diff --git a/arch/ia64/kernel/crash_dump.c b/arch/ia64/kernel/crash_dump.c
index 0ed3c3dee4cd..1aea8dbe06de 100644
--- a/arch/ia64/kernel/crash_dump.c
+++ b/arch/ia64/kernel/crash_dump.c
@@ -15,37 +15,38 @@
#include <linux/uaccess.h>
/**
- * copy_oldmem_page - copy one page from "oldmem"
+ * copy_oldmem_page_buf - copy one page from "oldmem"
* @pfn: page frame number to be copied
- * @buf: target memory address for the copy; this can be in kernel address
- * space or user address space (see @userbuf)
+ * @ubuf: target user memory pointer for the copy; use copy_to_user() if this
+ * pointer is not NULL
+ * @kbuf: target kernel memory pointer for the copy; use memcpy() if this
+ * pointer is not NULL
* @csize: number of bytes to copy
* @offset: offset in bytes into the page (based on pfn) to begin the copy
- * @userbuf: if set, @buf is in user address space, use copy_to_user(),
- * otherwise @buf is in kernel address space, use memcpy().
*
- * Copy a page from "oldmem". For this page, there is no pte mapped
- * in the current kernel. We stitch up a pte, similar to kmap_atomic.
+ * Copy a page from "oldmem" into the buffer pointed by either @ubuf or @kbuf.
+ * For this page, there is no pte mapped in the current kernel. We stitch up a
+ * pte, similar to kmap_atomic.
*
* Calling copy_to_user() in atomic context is not desirable. Hence first
* copying the data to a pre-allocated kernel page and then copying to user
* space in non-atomic context.
*/
ssize_t
-copy_oldmem_page(unsigned long pfn, char *buf,
- size_t csize, unsigned long offset, int userbuf)
+copy_oldmem_page_buf(unsigned long pfn, char __user *ubuf, char *kbuf,
+ size_t csize, unsigned long offset)
{
void *vaddr;
if (!csize)
return 0;
vaddr = __va(pfn<<PAGE_SHIFT);
- if (userbuf) {
- if (copy_to_user(buf, (vaddr + offset), csize)) {
+ if (ubuf) {
+ if (copy_to_user(ubuf, (vaddr + offset), csize)) {
return -EFAULT;
}
} else
- memcpy(buf, (vaddr + offset), csize);
+ memcpy(kbuf, (vaddr + offset), csize);
return csize;
}
--
2.17.1
The current interface copy_oldmem_page() passes a user pointer without
the __user annotation and hence performs unnecessary user/kernel pointer
conversions in its implementation.
Use the interface copy_oldmem_page_buf() to avoid this issue.
Cc: Heiko Carstens <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: Christian Borntraeger <[email protected]>
CC: Alexander Gordeev <[email protected]>
Cc: linux-s390 <[email protected]>
Signed-off-by: Amit Daniel Kachhap <[email protected]>
---
arch/s390/kernel/crash_dump.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/s390/kernel/crash_dump.c b/arch/s390/kernel/crash_dump.c
index 785d54c9350c..b1f8a908e8ca 100644
--- a/arch/s390/kernel/crash_dump.c
+++ b/arch/s390/kernel/crash_dump.c
@@ -214,8 +214,8 @@ static int copy_oldmem_user(void __user *dst, void *src, size_t count)
/*
* Copy one page from "oldmem"
*/
-ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
- unsigned long offset, int userbuf)
+ssize_t copy_oldmem_page_buf(unsigned long pfn, char __user *ubuf, char *kbuf,
+ size_t csize, unsigned long offset)
{
void *src;
int rc;
@@ -223,10 +223,10 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
if (!csize)
return 0;
src = (void *) (pfn << PAGE_SHIFT) + offset;
- if (userbuf)
- rc = copy_oldmem_user((void __force __user *) buf, src, csize);
+ if (ubuf)
+ rc = copy_oldmem_user((void __user *) ubuf, src, csize);
else
- rc = copy_oldmem_kernel((void *) buf, src, csize);
+ rc = copy_oldmem_kernel((void *) kbuf, src, csize);
return rc;
}
@@ -261,7 +261,7 @@ static int remap_oldmem_pfn_range_kdump(struct vm_area_struct *vma,
* Remap "oldmem" for zfcp/nvme dump
*
* We only map available memory above HSA size. Memory below HSA size
- * is read on demand using the copy_oldmem_page() function.
+ * is read on demand using the copy_oldmem_page_buf() function.
*/
static int remap_oldmem_pfn_range_zfcpdump(struct vm_area_struct *vma,
unsigned long from,
--
2.17.1
Now that all architectures have been converted to the new interface
copy_oldmem_page_buf(), remove the unused copy_oldmem_page() along with
the temporary weak definitions.
Cc: Dave Young <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Vivek Goyal <[email protected]>
Cc: kexec <[email protected]>
Cc: linux-fsdevel <[email protected]>
Signed-off-by: Amit Daniel Kachhap <[email protected]>
---
fs/proc/vmcore.c | 14 --------------
include/linux/crash_dump.h | 2 --
2 files changed, 16 deletions(-)
diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index d01b85c043dd..1edababe4bde 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -244,20 +244,6 @@ copy_oldmem_page_encrypted(unsigned long pfn, char __user *ubuf, char *kbuf,
return copy_oldmem_page_buf(pfn, ubuf, kbuf, csize, offset);
}
-ssize_t __weak
-copy_oldmem_page_buf(unsigned long pfn, char __user *ubuf, char *kbuf,
- size_t csize, unsigned long offset)
-{
- return -EOPNOTSUPP;
-}
-
-ssize_t __weak
-copy_oldmem_page(unsigned long pfn, char *ubuf, size_t csize,
- unsigned long offset, int userbuf)
-{
- return -EOPNOTSUPP;
-}
-
/*
* Copy to either kernel or user space buffer
*/
diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h
index 725c4e053ecf..e897bdc0b7bf 100644
--- a/include/linux/crash_dump.h
+++ b/include/linux/crash_dump.h
@@ -24,8 +24,6 @@ extern int remap_oldmem_pfn_range(struct vm_area_struct *vma,
unsigned long from, unsigned long pfn,
unsigned long size, pgprot_t prot);
-extern ssize_t copy_oldmem_page(unsigned long, char *, size_t,
- unsigned long, int);
extern ssize_t copy_oldmem_page_buf(unsigned long pfn, char __user *ubuf,
char *kbuf, size_t csize,
unsigned long offset);
--
2.17.1
On Fri, Dec 03, 2021 at 04:12:18PM +0530, Amit Daniel Kachhap wrote:
> + return read_from_oldmem_to_kernel(buf, count, ppos,
> + cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT));
Overly long line.
> +ssize_t read_from_oldmem(char __user *ubuf, char *kbuf, size_t count,
> + u64 *ppos, bool encrypted)
> {
> unsigned long pfn, offset;
> size_t nr_bytes;
> @@ -156,19 +163,27 @@ ssize_t read_from_oldmem(char *buf, size_t count,
> /* If pfn is not ram, return zeros for sparse dump files */
> if (!pfn_is_ram(pfn)) {
> tmp = 0;
> - if (!userbuf)
> - memset(buf, 0, nr_bytes);
> - else if (clear_user(buf, nr_bytes))
> + if (kbuf)
> + memset(kbuf, 0, nr_bytes);
> + else if (clear_user(ubuf, nr_bytes))
> tmp = -EFAULT;
This looks like a huge mess. What speaks against using an iov_iter
here?
On Mon, Dec 06, 2021 at 03:04:51PM +0100, Christoph Hellwig wrote:
> This looks like a huge mess. What speaks against using an iov_iter
> here?
I coincidentally made a start on this last night. Happy to stop.
What do you think to adding a generic copy_pfn_to_iter()? Not sure
which APIs to use to implement it ... some architectures have weird
requirements about which APIs can be used for what kinds of PFNs.
diff --git a/arch/arm/kernel/crash_dump.c b/arch/arm/kernel/crash_dump.c
index 53cb92435392..ed0d806a4d14 100644
--- a/arch/arm/kernel/crash_dump.c
+++ b/arch/arm/kernel/crash_dump.c
@@ -16,39 +16,28 @@
#include <linux/io.h>
/**
- * copy_oldmem_page() - copy one page from old kernel memory
- * @pfn: page frame number to be copied
- * @buf: buffer where the copied page is placed
- * @csize: number of bytes to copy
- * @offset: offset in bytes into the page
- * @userbuf: if set, @buf is int he user address space
+ * copy_pfn_to_iter() - copy one page from old kernel memory.
+ * @iter: Where to copy the page to.
+ * @pfn: Page frame number to be copied.
+ * @offset: Offset in bytes into the page.
+ * @len: Number of bytes to copy.
*
- * This function copies one page from old kernel memory into buffer pointed by
- * @buf. If @buf is in userspace, set @userbuf to %1. Returns number of bytes
- * copied or negative error in case of failure.
+ * This function copies (part of) one page from old kernel memory into
+ * memory described by @iter.
+ *
+ * Return: Number of bytes copied or negative error in case of failure.
*/
-ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
- size_t csize, unsigned long offset,
- int userbuf)
+ssize_t copy_pfn_to_iter(struct iov_iter *iter, unsigned long pfn,
+ size_t offset, size_t len)
{
void *vaddr;
- if (!csize)
- return 0;
-
vaddr = ioremap(__pfn_to_phys(pfn), PAGE_SIZE);
if (!vaddr)
return -ENOMEM;
- if (userbuf) {
- if (copy_to_user(buf, vaddr + offset, csize)) {
- iounmap(vaddr);
- return -EFAULT;
- }
- } else {
- memcpy(buf, vaddr + offset, csize);
- }
+ len = copy_to_iter(vaddr + offset, len, iter);
iounmap(vaddr);
- return csize;
+ return len;
}
diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h
index 620821549b23..2dd3a692bcb7 100644
--- a/include/linux/crash_dump.h
+++ b/include/linux/crash_dump.h
@@ -24,8 +24,8 @@ extern int remap_oldmem_pfn_range(struct vm_area_struct *vma,
unsigned long from, unsigned long pfn,
unsigned long size, pgprot_t prot);
-extern ssize_t copy_oldmem_page(unsigned long, char *, size_t,
- unsigned long, int);
+ssize_t copy_pfn_to_iter(struct iov_iter *i, unsigned long pfn, size_t offset,
+ size_t len);
extern ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf,
size_t csize, unsigned long offset,
int userbuf);
On Mon, Dec 06, 2021 at 02:17:24PM +0000, Matthew Wilcox wrote:
> On Mon, Dec 06, 2021 at 03:04:51PM +0100, Christoph Hellwig wrote:
> > This looks like a huge mess. What speaks against using an iov_iter
> > here?
>
> I coincidentally made a start on this last night. Happy to stop.
Don't stop!
> What do you think to adding a generic copy_pfn_to_iter()? Not sure
> which APIs to use to implement it ... some architectures have weird
> requirements about which APIs can be used for what kinds of PFNs.
Hmm. I thought kmap_local_pfn(_prot) is all we need?
On Mon, Dec 06, 2021 at 03:54:22PM +0100, Christoph Hellwig wrote:
> On Mon, Dec 06, 2021 at 02:17:24PM +0000, Matthew Wilcox wrote:
> > On Mon, Dec 06, 2021 at 03:04:51PM +0100, Christoph Hellwig wrote:
> > > This looks like a huge mess. What speaks against using an iov_iter
> > > here?
> >
> > I coincidentally made a start on this last night. Happy to stop.
>
> Don't stop!
>
> > What do you think to adding a generic copy_pfn_to_iter()? Not sure
> > which APIs to use to implement it ... some architectures have weird
> > requirements about which APIs can be used for what kinds of PFNs.
>
> Hmm. I thought kmap_local_pfn(_prot) is all we need?
In the !HIGHMEM case, that calls pfn_to_page(), and I think the
point of this path is that we don't have a struct page for this pfn.
On 12/6/21 7:34 PM, Christoph Hellwig wrote:
> On Fri, Dec 03, 2021 at 04:12:18PM +0530, Amit Daniel Kachhap wrote:
>> + return read_from_oldmem_to_kernel(buf, count, ppos,
>> + cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT));
>
> Overly long line.
>
>> +ssize_t read_from_oldmem(char __user *ubuf, char *kbuf, size_t count,
>> + u64 *ppos, bool encrypted)
>> {
>> unsigned long pfn, offset;
>> size_t nr_bytes;
>> @@ -156,19 +163,27 @@ ssize_t read_from_oldmem(char *buf, size_t count,
>> /* If pfn is not ram, return zeros for sparse dump files */
>> if (!pfn_is_ram(pfn)) {
>> tmp = 0;
>> - if (!userbuf)
>> - memset(buf, 0, nr_bytes);
>> - else if (clear_user(buf, nr_bytes))
>> + if (kbuf)
>> + memset(kbuf, 0, nr_bytes);
>> + else if (clear_user(ubuf, nr_bytes))
>> tmp = -EFAULT;
>
> This looks like a huge mess. What speaks against using an iov_iter
> here?
An iov_iter seems like a reasonable way forward. As a start I intended
to keep the changes minimal.
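For example (untested, just to show the direction), the !pfn_is_ram()
case and the copy call inside read_from_oldmem() could collapse to
something like the following, reusing the copy_pfn_to_iter() helper
sketched above:

	if (!pfn_is_ram(pfn)) {
		/* Sparse region: zero-fill whatever the iter points at. */
		if (iov_iter_zero(nr_bytes, iter) != nr_bytes)
			tmp = -EFAULT;
	} else {
		/*
		 * copy_pfn_to_iter() would replace both copy_oldmem_page()
		 * variants; the encrypted case is left out of this sketch.
		 */
		tmp = copy_pfn_to_iter(iter, pfn, offset, nr_bytes);
	}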
On Mon, Dec 06, 2021 at 03:07:15PM +0000, Matthew Wilcox wrote:
> > > What do you think to adding a generic copy_pfn_to_iter()? Not sure
> > > which APIs to use to implement it ... some architectures have weird
> > > requirements about which APIs can be used for what kinds of PFNs.
> >
> > Hmm. I thought kmap_local_pfn(_prot) is all we need?
>
> In the !HIGHMEM case, that calls pfn_to_page(), and I think the
> point of this path is that we don't have a struct page for this pfn.
Indeed. But to me this suggests that the !highmem stub is broken and
we should probably fix it rather than adding yet another interface.
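For reference, the !CONFIG_HIGHMEM stub currently boils down to roughly
the following (paraphrased from include/linux/highmem-internal.h), which
is why it cannot cope with a pfn that has no struct page behind it:

	/* Paraphrased: the !HIGHMEM stub assumes a backing struct page. */
	static inline void *kmap_local_pfn(unsigned long pfn)
	{
		return page_address(pfn_to_page(pfn));
	}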