2022-12-04 01:48:08

by Baoquan He

Subject: [PATCH v1 0/7] mm/vmalloc.c: allow vread() to read out vm_map_ram areas

Problem:
***
Stephen reported that vread() skips vm_map_ram areas when reading out
/proc/kcore with the drgn utility. Please see the link below for more
details:

/proc/kcore reads 0's for vmap_block
https://lore.kernel.org/all/[email protected]/T/#u

Root cause:
***
The normal vmalloc API uses struct vmap_area to manage the virtual
kernel area allocated, and associates a vm_struct with it to store more
information and pass it out. However, an area reserved through the
vm_map_ram() interface doesn't have a vm_struct bound to it. So the
current code in vread() skips vm_map_ram areas via the 'if (!va->vm)'
conditional check.
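
For reference, the skip happens here in the current vread() loop (a
trimmed excerpt of mm/vmalloc.c before this series):

	list_for_each_entry_from(va, &vmap_area_list, list) {
		if (!count)
			break;

		/* vm_map_ram areas have no vm_struct attached, so skip */
		if (!va->vm)
			continue;
		...
	}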

Solution:
***
There are two types of vm_map_ram areas. One is a whole vmap_area being
reserved and mapped at one time; the other is a whole vmap_area of
VMAP_BLOCK_SIZE being reserved at one time, then mapped into smaller
split regions several times via vb_alloc(). I will call the 2nd type a
vb region.
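
Below is a minimal sketch of the two paths from a caller's point of
view. map_pages() is a made-up helper here; VMAP_MAX_ALLOC is the
internal cutoff vm_map_ram() uses to choose between the two paths:

	/* Hypothetical caller; vm_map_ram() picks the path from 'count'. */
	static void *map_pages(struct page **pages, unsigned int count)
	{
		/*
		 * count > VMAP_MAX_ALLOC: a whole vmap_area is reserved
		 * and mapped at one time via alloc_vmap_area().
		 *
		 * count <= VMAP_MAX_ALLOC: a sub-region of a per-cpu,
		 * VMAP_BLOCK_SIZE sized vmap_block is handed out via
		 * vb_alloc(), i.e. a vb region.
		 */
		return vm_map_ram(pages, count, NUMA_NO_NODE);
	}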

In patches 1 and 2, add a flags field to struct vmap_area to mark these
two types of vm_map_ram area, and add a bitmap field used_map to struct
vmap_block to mark the vb regions being used, so that they can be
differentiated from the dirty and free regions in a vmap_block.
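
Roughly, the bookkeeping added in patch 1 looks like below (a trimmed
sketch; see patch 1 for the exact code):

	struct vmap_block {
		...
		DECLARE_BITMAP(used_map, VMAP_BBMAP_BITS);
		...
	};

	/* in vb_alloc(): mark the handed-out region as used */
	bitmap_set(vb->used_map, pages_off, (1UL << order));

	/* in vb_free(): clear it again when the region is freed */
	bitmap_clear(vb->used_map, offset, (1UL << order));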

With the help of the above vmap_area->flags and vmap_block->used_map,
we can recognize vm_map_ram areas in vread() and handle them
respectively in patch 3.

Now that we can identify vm_map_ram areas explicitly, change the
ambiguous check 'if (!va->vm)' in s_show() to the clear
'if (!va->vm && (va->flags & VMAP_RAM))' to pick out vm_map_ram areas.
This avoids a normal vmalloc area that is being unmapped from being
recognized as a vm_map_ram area. This is done in patch 4.

Besides,
***
In patch 5, ignore vmap areas with VM_UNINITIALIZED set in vm->flags.
This kind of area is created by calling __vmalloc_node_range();
VM_UNINITIALIZED being set indicates that it has a vm_struct associated
with it, but is still in the middle of the page allocating and mapping
process.
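
Concretely, the check added to the vread() loop in patch 5 is just
(sketch; see patch 5 for the exact code):

	if (vm && (vm->flags & VM_UNINITIALIZED))
		continue;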

In patches 6 and 7, set the VM_IOREMAP area flag in two places. This
will show those areas as 'ioremap' in /proc/vmallocinfo, and exclude
them from /proc/kcore.
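
For reference, the handling in vread() which treats such areas as
memory holes is visible in the patch 3 hunk below:

	else /* IOREMAP area is treated as memory hole */
		memset(buf, 0, n);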

Testing
***
Stephen helped test the RFC and the draft fix patch. For this v1
patchset, I have only compiled and run it on a bare metal machine for
basic functionality testing. It still needs Stephen's help to test the
vm_map_ram issue.

Changelog
***
RFC->v1:
- Add a new field 'flags' to vmap_area to mark vm_map_ram areas. It
  could be risky to reuse the vm union in vmap_area as done in the RFC.
  I will consider reusing the union in vmap_area to save memory later;
  for now take the simpler way so we can focus on resolving the main
  problem.
- Add patches 4~7 for optimization.

Baoquan He (7):
mm/vmalloc.c: add used_map into vmap_block to track space of
vmap_block
mm/vmalloc.c: add flags to mark vm_map_ram area
mm/vmalloc.c: allow vread() to read out vm_map_ram areas
mm/vmalloc: explicitly identify vm_map_ram area when shown in
/proc/vmallocinfo
mm/vmalloc: skip the uninitialized vmalloc areas
powerpc: mm: add VM_IOREMAP flag to the vmalloc area
sh: mm: set VM_IOREMAP flag to the vmalloc area

arch/powerpc/kernel/pci_64.c | 2 +-
arch/sh/kernel/cpu/sh4/sq.c | 2 +-
include/linux/vmalloc.h | 1 +
mm/vmalloc.c | 97 +++++++++++++++++++++++++++++++-----
4 files changed, 87 insertions(+), 15 deletions(-)

--
2.34.1


2022-12-04 02:05:00

by Baoquan He

Subject: [PATCH v1 3/7] mm/vmalloc.c: allow vread() to read out vm_map_ram areas

Currently, vread() can read out vmalloc areas which are associated with
a vm_struct. However, this doesn't work for areas created via the
vm_map_ram() interface, because they don't have an associated
vm_struct. So in vread(), these areas are skipped.

Here, add a new function vb_vread() to read out areas managed by
vmap_block specifically. Then recognize vm_map_ram areas via
vmap_area->flags and handle them respectively.

Signed-off-by: Baoquan He <[email protected]>
---
mm/vmalloc.c | 61 ++++++++++++++++++++++++++++++++++++++++++++++------
1 file changed, 54 insertions(+), 7 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d6f376060d83..e6b46da3e044 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3519,6 +3519,46 @@ static int aligned_vread(char *buf, char *addr, unsigned long count)
return copied;
}

+static void vb_vread(char *buf, char *addr, int count)
+{
+ char *start;
+ struct vmap_block *vb;
+ unsigned long offset;
+ unsigned int rs, re, n;
+
+ offset = ((unsigned long)addr & (VMAP_BLOCK_SIZE - 1)) >> PAGE_SHIFT;
+ vb = xa_load(&vmap_blocks, addr_to_vb_idx((unsigned long)addr));
+
+ spin_lock(&vb->lock);
+ if (bitmap_empty(vb->used_map, VMAP_BBMAP_BITS)) {
+ spin_unlock(&vb->lock);
+ memset(buf, 0, count);
+ return;
+ }
+ for_each_set_bitrange(rs, re, vb->used_map, VMAP_BBMAP_BITS) {
+ if (!count)
+ break;
+ start = vmap_block_vaddr(vb->va->va_start, rs);
+ if (addr < start) {
+ if (count == 0)
+ break;
+ *buf = '\0';
+ buf++;
+ addr++;
+ count--;
+ }
+ n = (re - rs + 1) << PAGE_SHIFT;
+ if (n > count)
+ n = count;
+ aligned_vread(buf, start, n);
+
+ buf += n;
+ addr += n;
+ count -= n;
+ }
+ spin_unlock(&vb->lock);
+}
+
/**
* vread() - read vmalloc area in a safe way.
* @buf: buffer for reading data
@@ -3549,7 +3589,7 @@ long vread(char *buf, char *addr, unsigned long count)
struct vm_struct *vm;
char *vaddr, *buf_start = buf;
unsigned long buflen = count;
- unsigned long n;
+ unsigned long n, size, flags;

addr = kasan_reset_tag(addr);

@@ -3570,12 +3610,16 @@ long vread(char *buf, char *addr, unsigned long count)
if (!count)
break;

- if (!va->vm)
+ vm = va->vm;
+ flags = va->flags & VMAP_FLAGS_MASK;
+
+ if (!vm && !flags)
continue;

- vm = va->vm;
- vaddr = (char *) vm->addr;
- if (addr >= vaddr + get_vm_area_size(vm))
+ vaddr = (char *) va->va_start;
+ size = flags ? va_size(va) : get_vm_area_size(vm);
+
+ if (addr >= vaddr + size)
continue;
while (addr < vaddr) {
if (count == 0)
@@ -3585,10 +3629,13 @@ long vread(char *buf, char *addr, unsigned long count)
addr++;
count--;
}
- n = vaddr + get_vm_area_size(vm) - addr;
+ n = vaddr + size - addr;
if (n > count)
n = count;
- if (!(vm->flags & VM_IOREMAP))
+
+ if ((flags & (VMAP_RAM|VMAP_BLOCK)) == (VMAP_RAM|VMAP_BLOCK))
+ vb_vread(buf, addr, n);
+ else if ((flags & VMAP_RAM) || !(vm->flags & VM_IOREMAP))
aligned_vread(buf, addr, n);
else /* IOREMAP area is treated as memory hole */
memset(buf, 0, n);
--
2.34.1

2022-12-04 02:05:00

by Baoquan He

Subject: [PATCH v1 2/7] mm/vmalloc.c: add flags to mark vm_map_ram area

Through the vmalloc API, a virtual kernel area is reserved for physical
address mapping. A vmap_area is used to track it, while a vm_struct is
allocated to associate with the vmap_area to store more information and
be passed out.

However, an area reserved via vm_map_ram() is an exception. It doesn't
have a vm_struct to associate with its vmap_area. And we can't
recognize a vmap_area with '->vm == NULL' as a vm_map_ram() area,
because the normal freeing path sets va->vm = NULL before unmapping;
please see function remove_vm_area().
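
For context, this is the relevant part of remove_vm_area() (trimmed
excerpt):

	struct vm_struct *vm = va->vm;

	spin_lock(&vmap_area_lock);
	va->vm = NULL;		/* cleared before the area is unmapped */
	spin_unlock(&vmap_area_lock);

	free_unmap_vmap_area(va);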

Meanwhile, there are two types of vm_map_ram areas. One is a whole
vmap_area being reserved and mapped at one time; the other is a whole
vmap_area of VMAP_BLOCK_SIZE being reserved, then mapped into smaller
split regions several times via vb_alloc().

To mark an area reserved through vm_map_ram(), add a flags field to
struct vmap_area. Bit 0 indicates whether it's a vm_map_ram area, while
bit 1 indicates whether it's the vmap_block type of vm_map_ram area.

This is a preparation for later use.

Signed-off-by: Baoquan He <[email protected]>
---
include/linux/vmalloc.h | 1 +
mm/vmalloc.c | 18 +++++++++++++++++-
2 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 096d48aa3437..69250efa03d1 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -76,6 +76,7 @@ struct vmap_area {
unsigned long subtree_max_size; /* in "free" tree */
struct vm_struct *vm; /* in "busy" tree */
};
+ unsigned long flags; /* mark type of vm_map_ram area */
};

/* archs that select HAVE_ARCH_HUGE_VMAP should override one or more of these */
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 5d3fd3e6fe09..d6f376060d83 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1815,6 +1815,7 @@ static void free_vmap_area_noflush(struct vmap_area *va)

spin_lock(&vmap_area_lock);
unlink_va(va, &vmap_area_root);
+ va->flags = 0;
spin_unlock(&vmap_area_lock);

nr_lazy = atomic_long_add_return((va->va_end - va->va_start) >>
@@ -1887,6 +1888,10 @@ struct vmap_area *find_vmap_area(unsigned long addr)

#define VMAP_BLOCK_SIZE (VMAP_BBMAP_BITS * PAGE_SIZE)

+#define VMAP_RAM 0x1
+#define VMAP_BLOCK 0x2
+#define VMAP_FLAGS_MASK 0x3
+
struct vmap_block_queue {
spinlock_t lock;
struct list_head free;
@@ -1967,6 +1972,9 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
kfree(vb);
return ERR_CAST(va);
}
+ spin_lock(&vmap_area_lock);
+ va->flags = VMAP_RAM|VMAP_BLOCK;
+ spin_unlock(&vmap_area_lock);

vaddr = vmap_block_vaddr(va->va_start, 0);
spin_lock_init(&vb->lock);
@@ -2229,8 +2237,12 @@ void vm_unmap_ram(const void *mem, unsigned int count)
return;
}

- va = find_vmap_area(addr);
+ spin_lock(&vmap_area_lock);
+ va = __find_vmap_area((unsigned long)addr, &vmap_area_root);
BUG_ON(!va);
+ if (va)
+ va->flags &= ~VMAP_RAM;
+ spin_unlock(&vmap_area_lock);
debug_check_no_locks_freed((void *)va->va_start,
(va->va_end - va->va_start));
free_unmap_vmap_area(va);
@@ -2269,6 +2281,10 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node)
if (IS_ERR(va))
return NULL;

+ spin_lock(&vmap_area_lock);
+ va->flags = VMAP_RAM;
+ spin_unlock(&vmap_area_lock);
+
addr = va->va_start;
mem = (void *)addr;
}
--
2.34.1

2022-12-04 02:05:18

by Baoquan He

Subject: [PATCH v1 7/7] sh: mm: set VM_IOREMAP flag to the vmalloc area

Currently, vmalloc areas with the VM_IOREMAP flag set, apart from the
specific alignment clamping in __get_vm_area_node(), will be:
1) Shown as ioremap in /proc/vmallocinfo;
2) Ignored by /proc/kcore reading via vread()

So for the ioremap in __sq_remap() of sh, we should set VM_IOREMAP
in the flags to make it handled correctly as above.

Signed-off-by: Baoquan He <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: [email protected] (open list:SUPERH)
---
arch/sh/kernel/cpu/sh4/sq.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/sh/kernel/cpu/sh4/sq.c b/arch/sh/kernel/cpu/sh4/sq.c
index a76b94e41e91..27f2e3da5aa2 100644
--- a/arch/sh/kernel/cpu/sh4/sq.c
+++ b/arch/sh/kernel/cpu/sh4/sq.c
@@ -103,7 +103,7 @@ static int __sq_remap(struct sq_mapping *map, pgprot_t prot)
#if defined(CONFIG_MMU)
struct vm_struct *vma;

- vma = __get_vm_area_caller(map->size, VM_ALLOC, map->sq_addr,
+ vma = __get_vm_area_caller(map->size, VM_IOREMAP, map->sq_addr,
SQ_ADDRMAX, __builtin_return_address(0));
if (!vma)
return -ENOMEM;
--
2.34.1

2022-12-04 02:05:17

by Baoquan He

Subject: [PATCH v1 6/7] powerpc: mm: add VM_IOREMAP flag to the vmalloc area

Currently, vmalloc areas with the VM_IOREMAP flag set, apart from the
specific alignment clamping in __get_vm_area_node(), will be:
1) Shown as ioremap in /proc/vmallocinfo;
2) Ignored by /proc/kcore reading via vread()

So for the io mapping in ioremap_phb() of ppc, we should set VM_IOREMAP
in the flags to make it handled correctly as above.

Signed-off-by: Baoquan He <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: "Pali Rohár" <[email protected]>
Cc: [email protected]
---
arch/powerpc/kernel/pci_64.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/pci_64.c b/arch/powerpc/kernel/pci_64.c
index 0c7cfb9fab04..fd42059ae2a5 100644
--- a/arch/powerpc/kernel/pci_64.c
+++ b/arch/powerpc/kernel/pci_64.c
@@ -132,7 +132,7 @@ void __iomem *ioremap_phb(phys_addr_t paddr, unsigned long size)
* address decoding but I'd rather not deal with those outside of the
* reserved 64K legacy region.
*/
- area = __get_vm_area_caller(size, 0, PHB_IO_BASE, PHB_IO_END,
+ area = __get_vm_area_caller(size, VM_IOREMAP, PHB_IO_BASE, PHB_IO_END,
__builtin_return_address(0));
if (!area)
return NULL;
--
2.34.1

2022-12-04 02:24:20

by Baoquan He

Subject: [PATCH v1 4/7] mm/vmalloc: explicitly identify vm_map_ram area when shown in /proc/vmallocinfo

Now, by marking VMAP_RAM in vmap_area->flags for vm_map_ram areas, we
can clearly differentiate them from other vmalloc areas. So in
s_show(), change the ambiguous check 'if (!va->vm)' to the clear
'if (!va->vm && (va->flags & VMAP_RAM))'. This picks out vm_map_ram
areas, and avoids a normal vmalloc area that is being unmapped from
being mistakenly recognized as a vm_map_ram area.

Meanwhile, the code comment above the vm_map_ram area check in s_show()
is no longer needed, so remove it here.

Signed-off-by: Baoquan He <[email protected]>
---
mm/vmalloc.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index e6b46da3e044..3c60026fb162 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -4181,11 +4181,7 @@ static int s_show(struct seq_file *m, void *p)

va = list_entry(p, struct vmap_area, list);

- /*
- * s_show can encounter race with remove_vm_area, !vm on behalf
- * of vmap area is being tear down or vm_map_ram allocation.
- */
- if (!va->vm) {
+ if (!va->vm && (va->flags & VMAP_RAM)) {
seq_printf(m, "0x%pK-0x%pK %7ld vm_map_ram\n",
(void *)va->va_start, (void *)va->va_end,
va->va_end - va->va_start);
--
2.34.1

2022-12-04 04:55:46

by kernel test robot

Subject: Re: [PATCH v1 3/7] mm/vmalloc.c: allow vread() to read out vm_map_ram areas

Hi Baoquan,

I love your patch! Perhaps something to improve:

[auto build test WARNING on akpm-mm/mm-everything]

url: https://github.com/intel-lab-lkp/linux/commits/Baoquan-He/mm-vmalloc-c-allow-vread-to-read-out-vm_map_ram-areas/20221204-093322
base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link: https://lore.kernel.org/r/20221204013046.154960-4-bhe%40redhat.com
patch subject: [PATCH v1 3/7] mm/vmalloc.c: allow vread() to read out vm_map_ram areas
config: arm-randconfig-r046-20221204
compiler: arm-linux-gnueabi-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/intel-lab-lkp/linux/commit/0bcc4ce1e46418b86eb569175879081116649727
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Baoquan-He/mm-vmalloc-c-allow-vread-to-read-out-vm_map_ram-areas/20221204-093322
git checkout 0bcc4ce1e46418b86eb569175879081116649727
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=arm SHELL=/bin/bash

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <[email protected]>

All warnings (new ones prefixed by >>):

mm/vmalloc.c: In function 'vb_vread':
>> mm/vmalloc.c:3540:23: warning: variable 'offset' set but not used [-Wunused-but-set-variable]
3540 | unsigned long offset;
| ^~~~~~


vim +/offset +3540 mm/vmalloc.c

3535
3536 static void vb_vread(char *buf, char *addr, int count)
3537 {
3538 char *start;
3539 struct vmap_block *vb;
> 3540 unsigned long offset;
3541 unsigned int rs, re, n;
3542
3543 offset = ((unsigned long)addr & (VMAP_BLOCK_SIZE - 1)) >> PAGE_SHIFT;
3544 vb = xa_load(&vmap_blocks, addr_to_vb_idx((unsigned long)addr));
3545
3546 spin_lock(&vb->lock);
3547 if (bitmap_empty(vb->used_map, VMAP_BBMAP_BITS)) {
3548 spin_unlock(&vb->lock);
3549 memset(buf, 0, count);
3550 return;
3551 }
3552 for_each_set_bitrange(rs, re, vb->used_map, VMAP_BBMAP_BITS) {
3553 if (!count)
3554 break;
3555 start = vmap_block_vaddr(vb->va->va_start, rs);
3556 if (addr < start) {
3557 if (count == 0)
3558 break;
3559 *buf = '\0';
3560 buf++;
3561 addr++;
3562 count--;
3563 }
3564 n = (re - rs + 1) << PAGE_SHIFT;
3565 if (n > count)
3566 n = count;
3567 aligned_vread(buf, start, n);
3568
3569 buf += n;
3570 addr += n;
3571 count -= n;
3572 }
3573 spin_unlock(&vb->lock);
3574 }
3575

--
0-DAY CI Kernel Test Service
https://01.org/lkp



2022-12-05 13:43:00

by Uladzislau Rezki

Subject: Re: [PATCH v1 2/7] mm/vmalloc.c: add flags to mark vm_map_ram area

> Through the vmalloc API, a virtual kernel area is reserved for physical
> address mapping. A vmap_area is used to track it, while a vm_struct is
> allocated to associate with the vmap_area to store more information and
> be passed out.
>
> However, an area reserved via vm_map_ram() is an exception. It doesn't
> have a vm_struct to associate with its vmap_area. And we can't
> recognize a vmap_area with '->vm == NULL' as a vm_map_ram() area,
> because the normal freeing path sets va->vm = NULL before unmapping;
> please see function remove_vm_area().
>
> Meanwhile, there are two types of vm_map_ram areas. One is a whole
> vmap_area being reserved and mapped at one time; the other is a whole
> vmap_area of VMAP_BLOCK_SIZE being reserved, then mapped into smaller
> split regions several times via vb_alloc().
>
> To mark an area reserved through vm_map_ram(), add a flags field to
> struct vmap_area. Bit 0 indicates whether it's a vm_map_ram area, while
> bit 1 indicates whether it's the vmap_block type of vm_map_ram area.
>
> This is a preparation for later use.
>
> Signed-off-by: Baoquan He <[email protected]>
> ---
> include/linux/vmalloc.h | 1 +
> mm/vmalloc.c | 18 +++++++++++++++++-
> 2 files changed, 18 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> index 096d48aa3437..69250efa03d1 100644
> --- a/include/linux/vmalloc.h
> +++ b/include/linux/vmalloc.h
> @@ -76,6 +76,7 @@ struct vmap_area {
> unsigned long subtree_max_size; /* in "free" tree */
> struct vm_struct *vm; /* in "busy" tree */
> };
> + unsigned long flags; /* mark type of vm_map_ram area */
> };
>
> /* archs that select HAVE_ARCH_HUGE_VMAP should override one or more of these */
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 5d3fd3e6fe09..d6f376060d83 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -1815,6 +1815,7 @@ static void free_vmap_area_noflush(struct vmap_area *va)
>
> spin_lock(&vmap_area_lock);
> unlink_va(va, &vmap_area_root);
> + va->flags = 0;
> spin_unlock(&vmap_area_lock);
>
This is not a good place to set flags to zero. It looks to me like a
corner case and kind of too specific.


> nr_lazy = atomic_long_add_return((va->va_end - va->va_start) >>
> @@ -1887,6 +1888,10 @@ struct vmap_area *find_vmap_area(unsigned long addr)
>
> #define VMAP_BLOCK_SIZE (VMAP_BBMAP_BITS * PAGE_SIZE)
>
> +#define VMAP_RAM 0x1
> +#define VMAP_BLOCK 0x2
> +#define VMAP_FLAGS_MASK 0x3
> +
> struct vmap_block_queue {
> spinlock_t lock;
> struct list_head free;
> @@ -1967,6 +1972,9 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
> kfree(vb);
> return ERR_CAST(va);
> }
> + spin_lock(&vmap_area_lock);
> + va->flags = VMAP_RAM|VMAP_BLOCK;
> + spin_unlock(&vmap_area_lock);
>
The per-cpu code was created as a fast per-cpu allocator because of
high vmalloc lock contention. If possible we should avoid locking the
vmap_area_lock, because it has high contention.

>
> vaddr = vmap_block_vaddr(va->va_start, 0);
> spin_lock_init(&vb->lock);
> @@ -2229,8 +2237,12 @@ void vm_unmap_ram(const void *mem, unsigned int count)
> return;
> }
>
> - va = find_vmap_area(addr);
> + spin_lock(&vmap_area_lock);
> + va = __find_vmap_area((unsigned long)addr, &vmap_area_root);
> BUG_ON(!va);
> + if (va)
> + va->flags &= ~VMAP_RAM;
> + spin_unlock(&vmap_area_lock);
> debug_check_no_locks_freed((void *)va->va_start,
> (va->va_end - va->va_start));
> free_unmap_vmap_area(va);
> @@ -2269,6 +2281,10 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node)
> if (IS_ERR(va))
> return NULL;
>
> + spin_lock(&vmap_area_lock);
> + va->flags = VMAP_RAM;
> + spin_unlock(&vmap_area_lock);
> +
>
Same here.

--
Uladzislau Rezki

2022-12-07 08:36:36

by Baoquan He

Subject: Re: [PATCH v1 2/7] mm/vmalloc.c: add flags to mark vm_map_ram area

On 12/05/22 at 01:56pm, Uladzislau Rezki wrote:
> > Through the vmalloc API, a virtual kernel area is reserved for physical
> > address mapping. A vmap_area is used to track it, while a vm_struct is
> > allocated to associate with the vmap_area to store more information and
> > be passed out.
> >
> > However, an area reserved via vm_map_ram() is an exception. It doesn't
> > have a vm_struct to associate with its vmap_area. And we can't
> > recognize a vmap_area with '->vm == NULL' as a vm_map_ram() area,
> > because the normal freeing path sets va->vm = NULL before unmapping;
> > please see function remove_vm_area().
> >
> > Meanwhile, there are two types of vm_map_ram areas. One is a whole
> > vmap_area being reserved and mapped at one time; the other is a whole
> > vmap_area of VMAP_BLOCK_SIZE being reserved, then mapped into smaller
> > split regions several times via vb_alloc().
> >
> > To mark an area reserved through vm_map_ram(), add a flags field to
> > struct vmap_area. Bit 0 indicates whether it's a vm_map_ram area, while
> > bit 1 indicates whether it's the vmap_block type of vm_map_ram area.
> >
> > This is a preparation for later use.
> >
> > Signed-off-by: Baoquan He <[email protected]>
> > ---
> > include/linux/vmalloc.h | 1 +
> > mm/vmalloc.c | 18 +++++++++++++++++-
> > 2 files changed, 18 insertions(+), 1 deletion(-)
> >
> > diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> > index 096d48aa3437..69250efa03d1 100644
> > --- a/include/linux/vmalloc.h
> > +++ b/include/linux/vmalloc.h
> > @@ -76,6 +76,7 @@ struct vmap_area {
> > unsigned long subtree_max_size; /* in "free" tree */
> > struct vm_struct *vm; /* in "busy" tree */
> > };
> > + unsigned long flags; /* mark type of vm_map_ram area */
> > };
> >
> > /* archs that select HAVE_ARCH_HUGE_VMAP should override one or more of these */
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index 5d3fd3e6fe09..d6f376060d83 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -1815,6 +1815,7 @@ static void free_vmap_area_noflush(struct vmap_area *va)
> >
> > spin_lock(&vmap_area_lock);
> > unlink_va(va, &vmap_area_root);
> > + va->flags = 0;
> > spin_unlock(&vmap_area_lock);
> >
> This is not a good place to set flags to zero. It looks to me like a
> corner case and kind of too specific.

Thanks for reviewing.

Here, I thought to clear VMAP_RAM|VMAP_BLOCK in vmap->flags when
freeing the vmap_block. I didn't find a good place to do the clearing.
When we call free_vmap_block(), we either come from
purge_fragmented_blocks() or from vb_free(). vb_free() calls
free_vmap_block() when the whole vmap_block is dirty.
purge_fragmented_blocks() tries to purge all vmap_blocks which only
have dirty or free regions. So both of these functions call
free_vmap_block() only when there's no used region left in the
vmap_block.

purge_fragmented_blocks()
vb_free()
-->free_vmap_block()

So it seems we don't need to clear VMAP_RAM|VMAP_BLOCK in vmap->flags,
because no mapping exists in the vmap_block any more. The consequent
free_vmap_block() will remove the relevant vmap_area from
vmap_area_list and the vmap_area_root tree.

So I plan to remove the code change in this place.
>
>
> > nr_lazy = atomic_long_add_return((va->va_end - va->va_start) >>
> > @@ -1887,6 +1888,10 @@ struct vmap_area *find_vmap_area(unsigned long addr)
> >
> > #define VMAP_BLOCK_SIZE (VMAP_BBMAP_BITS * PAGE_SIZE)
> >
> > +#define VMAP_RAM 0x1
> > +#define VMAP_BLOCK 0x2
> > +#define VMAP_FLAGS_MASK 0x3
> > +
> > struct vmap_block_queue {
> > spinlock_t lock;
> > struct list_head free;
> > @@ -1967,6 +1972,9 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
> > kfree(vb);
> > return ERR_CAST(va);
> > }
> > + spin_lock(&vmap_area_lock);
> > + va->flags = VMAP_RAM|VMAP_BLOCK;
> > + spin_unlock(&vmap_area_lock);
> >
> The per-cpu code was created as a fast per-cpu allocator because of
> high vmalloc lock contention. If possible we should avoid locking the
> vmap_area_lock, because it has high contention.

Fair enough. I made the below draft patch to address the concern. By
adding an argument va_flags to alloc_vmap_area(), we can pass the
vm_map_ram flags into alloc_vmap_area() and have them filled into
vmap_area->flags. With this, we don't need an extra step to acquire
vmap_area_lock and do the flags setting. Is it OK with you?

From 115f6080b339d0cf9dd20c5f6c0d3121f6b22274 Mon Sep 17 00:00:00 2001
From: Baoquan He <[email protected]>
Date: Wed, 7 Dec 2022 11:08:14 +0800
Subject: [PATCH] mm/vmalloc: change alloc_vmap_area() to pass in va_flags

With this change, we can pass and set vmap_area->flags for a
vm_map_ram area in alloc_vmap_area(). Then no extra step needs to be
added to acquire vmap_area_lock when setting vmap_area->flags.

Signed-off-by: Baoquan He <[email protected]>
---
mm/vmalloc.c | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ccaa461998f3..d74eddec352f 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1586,7 +1586,8 @@ preload_this_cpu_lock(spinlock_t *lock, gfp_t gfp_mask, int node)
static struct vmap_area *alloc_vmap_area(unsigned long size,
unsigned long align,
unsigned long vstart, unsigned long vend,
- int node, gfp_t gfp_mask)
+ int node, gfp_t gfp_mask,
+ unsigned long va_flags)
{
struct vmap_area *va;
unsigned long freed;
@@ -1630,6 +1632,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
va->va_start = addr;
va->va_end = addr + size;
va->vm = NULL;
+ va->flags = va_flags;

spin_lock(&vmap_area_lock);
insert_vmap_area(va, &vmap_area_root, &vmap_area_list);
@@ -1961,7 +1964,8 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)

va = alloc_vmap_area(VMAP_BLOCK_SIZE, VMAP_BLOCK_SIZE,
VMALLOC_START, VMALLOC_END,
- node, gfp_mask);
+ node, gfp_mask,
+ VMAP_RAM|VMAP_BLOCK);
if (IS_ERR(va)) {
kfree(vb);
return ERR_CAST(va);
@@ -2258,7 +2262,8 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node)
} else {
struct vmap_area *va;
va = alloc_vmap_area(size, PAGE_SIZE,
- VMALLOC_START, VMALLOC_END, node, GFP_KERNEL);
+ VMALLOC_START, VMALLOC_END,
+ node, GFP_KERNEL, VMAP_RAM);
if (IS_ERR(va))
return NULL;

@@ -2498,7 +2503,7 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
if (!(flags & VM_NO_GUARD))
size += PAGE_SIZE;

- va = alloc_vmap_area(size, align, start, end, node, gfp_mask);
+ va = alloc_vmap_area(size, align, start, end, node, gfp_mask, 0);
if (IS_ERR(va)) {
kfree(area);
return NULL;
--
2.34.1

2022-12-08 20:15:24

by Uladzislau Rezki

Subject: Re: [PATCH v1 2/7] mm/vmalloc.c: add flags to mark vm_map_ram area

On Wed, Dec 07, 2022 at 04:03:41PM +0800, Baoquan He wrote:
> On 12/05/22 at 01:56pm, Uladzislau Rezki wrote:
> > > Through the vmalloc API, a virtual kernel area is reserved for physical
> > > address mapping. A vmap_area is used to track it, while a vm_struct is
> > > allocated to associate with the vmap_area to store more information and
> > > be passed out.
> > >
> > > However, an area reserved via vm_map_ram() is an exception. It doesn't
> > > have a vm_struct to associate with its vmap_area. And we can't
> > > recognize a vmap_area with '->vm == NULL' as a vm_map_ram() area,
> > > because the normal freeing path sets va->vm = NULL before unmapping;
> > > please see function remove_vm_area().
> > >
> > > Meanwhile, there are two types of vm_map_ram areas. One is a whole
> > > vmap_area being reserved and mapped at one time; the other is a whole
> > > vmap_area of VMAP_BLOCK_SIZE being reserved, then mapped into smaller
> > > split regions several times via vb_alloc().
> > >
> > > To mark an area reserved through vm_map_ram(), add a flags field to
> > > struct vmap_area. Bit 0 indicates whether it's a vm_map_ram area, while
> > > bit 1 indicates whether it's the vmap_block type of vm_map_ram area.
> > >
> > > This is a preparation for later use.
> > >
> > > Signed-off-by: Baoquan He <[email protected]>
> > > ---
> > > include/linux/vmalloc.h | 1 +
> > > mm/vmalloc.c | 18 +++++++++++++++++-
> > > 2 files changed, 18 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> > > index 096d48aa3437..69250efa03d1 100644
> > > --- a/include/linux/vmalloc.h
> > > +++ b/include/linux/vmalloc.h
> > > @@ -76,6 +76,7 @@ struct vmap_area {
> > > unsigned long subtree_max_size; /* in "free" tree */
> > > struct vm_struct *vm; /* in "busy" tree */
> > > };
> > > + unsigned long flags; /* mark type of vm_map_ram area */
> > > };
> > >
> > > /* archs that select HAVE_ARCH_HUGE_VMAP should override one or more of these */
> > > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > > index 5d3fd3e6fe09..d6f376060d83 100644
> > > --- a/mm/vmalloc.c
> > > +++ b/mm/vmalloc.c
> > > @@ -1815,6 +1815,7 @@ static void free_vmap_area_noflush(struct vmap_area *va)
> > >
> > > spin_lock(&vmap_area_lock);
> > > unlink_va(va, &vmap_area_root);
> > > + va->flags = 0;
> > > spin_unlock(&vmap_area_lock);
> > >
> > This is not a good place to set flags to zero. It looks to me like a
> > corner case and kind of too specific.
>
> Thanks for reviewing.
>
> Here, I thought to clear VMAP_RAM|VMAP_BLOCK in vmap->flags when
> freeing the vmap_block. I didn't find a good place to do the clearing.
> When we call free_vmap_block(), we either come from
> purge_fragmented_blocks() or from vb_free(). vb_free() calls
> free_vmap_block() when the whole vmap_block is dirty.
> purge_fragmented_blocks() tries to purge all vmap_blocks which only
> have dirty or free regions. So both of these functions call
> free_vmap_block() only when there's no used region left in the
> vmap_block.
>
> purge_fragmented_blocks()
> vb_free()
> -->free_vmap_block()
>
> So it seems we don't need to clear VMAP_RAM|VMAP_BLOCK in vmap->flags,
> because no mapping exists in the vmap_block any more. The consequent
> free_vmap_block() will remove the relevant vmap_area from
> vmap_area_list and the vmap_area_root tree.
>
> So I plan to remove the code change in this place.
> >
> >
> > > nr_lazy = atomic_long_add_return((va->va_end - va->va_start) >>
> > > @@ -1887,6 +1888,10 @@ struct vmap_area *find_vmap_area(unsigned long addr)
> > >
> > > #define VMAP_BLOCK_SIZE (VMAP_BBMAP_BITS * PAGE_SIZE)
> > >
> > > +#define VMAP_RAM 0x1
> > > +#define VMAP_BLOCK 0x2
> > > +#define VMAP_FLAGS_MASK 0x3
> > > +
> > > struct vmap_block_queue {
> > > spinlock_t lock;
> > > struct list_head free;
> > > @@ -1967,6 +1972,9 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
> > > kfree(vb);
> > > return ERR_CAST(va);
> > > }
> > > + spin_lock(&vmap_area_lock);
> > > + va->flags = VMAP_RAM|VMAP_BLOCK;
> > > + spin_unlock(&vmap_area_lock);
> > >
> > > The per-cpu code was created as a fast per-cpu allocator because of
> > > high vmalloc lock contention. If possible we should avoid locking the
> > > vmap_area_lock, because it has high contention.
>
> Fair enough. I made the below draft patch to address the concern. By
> adding an argument va_flags to alloc_vmap_area(), we can pass the
> vm_map_ram flags into alloc_vmap_area() and have them filled into
> vmap_area->flags. With this, we don't need an extra step to acquire
> vmap_area_lock and do the flags setting. Is it OK with you?
>
> From 115f6080b339d0cf9dd20c5f6c0d3121f6b22274 Mon Sep 17 00:00:00 2001
> From: Baoquan He <[email protected]>
> Date: Wed, 7 Dec 2022 11:08:14 +0800
> Subject: [PATCH] mm/vmalloc: change alloc_vmap_area() to pass in va_flags
>
> With this change, we can pass and set vmap_area->flags for a
> vm_map_ram area in alloc_vmap_area(). Then no extra step needs to be
> added to acquire vmap_area_lock when setting vmap_area->flags.
>
> Signed-off-by: Baoquan He <[email protected]>
> ---
> mm/vmalloc.c | 13 +++++++++----
> 1 file changed, 9 insertions(+), 4 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index ccaa461998f3..d74eddec352f 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -1586,7 +1586,8 @@ preload_this_cpu_lock(spinlock_t *lock, gfp_t gfp_mask, int node)
> static struct vmap_area *alloc_vmap_area(unsigned long size,
> unsigned long align,
> unsigned long vstart, unsigned long vend,
> - int node, gfp_t gfp_mask)
> + int node, gfp_t gfp_mask,
> + unsigned long va_flags)
> {
> struct vmap_area *va;
> unsigned long freed;
> @@ -1630,6 +1632,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
> va->va_start = addr;
> va->va_end = addr + size;
> va->vm = NULL;
> + va->flags = va_flags;
>
> spin_lock(&vmap_area_lock);
> insert_vmap_area(va, &vmap_area_root, &vmap_area_list);
> @@ -1961,7 +1964,8 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
>
> va = alloc_vmap_area(VMAP_BLOCK_SIZE, VMAP_BLOCK_SIZE,
> VMALLOC_START, VMALLOC_END,
> - node, gfp_mask);
> + node, gfp_mask,
> + VMAP_RAM|VMAP_BLOCK);
> if (IS_ERR(va)) {
> kfree(vb);
> return ERR_CAST(va);
> @@ -2258,7 +2262,8 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node)
> } else {
> struct vmap_area *va;
> va = alloc_vmap_area(size, PAGE_SIZE,
> - VMALLOC_START, VMALLOC_END, node, GFP_KERNEL);
> + VMALLOC_START, VMALLOC_END,
> + node, GFP_KERNEL, VMAP_RAM);
> if (IS_ERR(va))
> return NULL;
>
> @@ -2498,7 +2503,7 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
> if (!(flags & VM_NO_GUARD))
> size += PAGE_SIZE;
>
> - va = alloc_vmap_area(size, align, start, end, node, gfp_mask);
> + va = alloc_vmap_area(size, align, start, end, node, gfp_mask, 0);
> if (IS_ERR(va)) {
> kfree(area);
> return NULL;
> --
> 2.34.1
>
Yes, this is better than it was before. Adding an extra parameter makes
it more valid and logical.

--
Uladzislau Rezki

2022-12-09 08:54:35

by Baoquan He

Subject: Re: [PATCH v1 2/7] mm/vmalloc.c: add flags to mark vm_map_ram area

On 12/08/22 at 08:52pm, Uladzislau Rezki wrote:
> On Wed, Dec 07, 2022 at 04:03:41PM +0800, Baoquan He wrote:
......
> > > > @@ -1967,6 +1972,9 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
> > > > kfree(vb);
> > > > return ERR_CAST(va);
> > > > }
> > > > + spin_lock(&vmap_area_lock);
> > > > + va->flags = VMAP_RAM|VMAP_BLOCK;
> > > > + spin_unlock(&vmap_area_lock);
> > > >
> > > The per-cpu code was created as a fast per-cpu allocator because of
> > > high vmalloc lock contention. If possible we should avoid locking the
> > > vmap_area_lock, because it has high contention.
> >
> > Fair enough. I made the below draft patch to address the concern. By
> > adding an argument va_flags to alloc_vmap_area(), we can pass the
> > vm_map_ram flags into alloc_vmap_area() and have them filled into
> > vmap_area->flags. With this, we don't need an extra step to acquire
> > vmap_area_lock and do the flags setting. Is it OK with you?
> >
> > From 115f6080b339d0cf9dd20c5f6c0d3121f6b22274 Mon Sep 17 00:00:00 2001
> > From: Baoquan He <[email protected]>
> > Date: Wed, 7 Dec 2022 11:08:14 +0800
> > Subject: [PATCH] mm/vmalloc: change alloc_vmap_area() to pass in va_flags
> >
> > With this change, we can pass and set vmap_area->flags for a
> > vm_map_ram area in alloc_vmap_area(). Then no extra step needs to be
> > added to acquire vmap_area_lock when setting vmap_area->flags.
> >
> > Signed-off-by: Baoquan He <[email protected]>
> > ---
> > mm/vmalloc.c | 13 +++++++++----
> > 1 file changed, 9 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index ccaa461998f3..d74eddec352f 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -1586,7 +1586,8 @@ preload_this_cpu_lock(spinlock_t *lock, gfp_t gfp_mask, int node)
> > static struct vmap_area *alloc_vmap_area(unsigned long size,
> > unsigned long align,
> > unsigned long vstart, unsigned long vend,
> > - int node, gfp_t gfp_mask)
> > + int node, gfp_t gfp_mask,
> > + unsigned long va_flags)
> > {
> > struct vmap_area *va;
> > unsigned long freed;
> > @@ -1630,6 +1632,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
> > va->va_start = addr;
> > va->va_end = addr + size;
> > va->vm = NULL;
> > + va->flags = va_flags;
> >
> > spin_lock(&vmap_area_lock);
> > insert_vmap_area(va, &vmap_area_root, &vmap_area_list);
> > @@ -1961,7 +1964,8 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
> >
> > va = alloc_vmap_area(VMAP_BLOCK_SIZE, VMAP_BLOCK_SIZE,
> > VMALLOC_START, VMALLOC_END,
> > - node, gfp_mask);
> > + node, gfp_mask,
> > + VMAP_RAM|VMAP_BLOCK);
> > if (IS_ERR(va)) {
> > kfree(vb);
> > return ERR_CAST(va);
> > @@ -2258,7 +2262,8 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node)
> > } else {
> > struct vmap_area *va;
> > va = alloc_vmap_area(size, PAGE_SIZE,
> > - VMALLOC_START, VMALLOC_END, node, GFP_KERNEL);
> > + VMALLOC_START, VMALLOC_END,
> > + node, GFP_KERNEL, VMAP_RAM);
> > if (IS_ERR(va))
> > return NULL;
> >
> > @@ -2498,7 +2503,7 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
> > if (!(flags & VM_NO_GUARD))
> > size += PAGE_SIZE;
> >
> > - va = alloc_vmap_area(size, align, start, end, node, gfp_mask);
> > + va = alloc_vmap_area(size, align, start, end, node, gfp_mask, 0);
> > if (IS_ERR(va)) {
> > kfree(area);
> > return NULL;
> > --
> > 2.34.1
> >
> Yes, this is better than it was before. Adding an extra parameter makes
> it more valid and logical.

That's great. I will add this in v2.

2022-12-17 01:49:34

by Baoquan He

Subject: Re: [PATCH v1 3/7] mm/vmalloc.c: allow vread() to read out vm_map_ram areas

On 12/04/22 at 11:47am, kernel test robot wrote:
> Hi Baoquan,
>
> I love your patch! Perhaps something to improve:
>
> [auto build test WARNING on akpm-mm/mm-everything]
>
> url: https://github.com/intel-lab-lkp/linux/commits/Baoquan-He/mm-vmalloc-c-allow-vread-to-read-out-vm_map_ram-areas/20221204-093322
> base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
> patch link: https://lore.kernel.org/r/20221204013046.154960-4-bhe%40redhat.com
> patch subject: [PATCH v1 3/7] mm/vmalloc.c: allow vread() to read out vm_map_ram areas
> config: arm-randconfig-r046-20221204
> compiler: arm-linux-gnueabi-gcc (GCC) 12.1.0
> reproduce (this is a W=1 build):
> wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
> chmod +x ~/bin/make.cross
> # https://github.com/intel-lab-lkp/linux/commit/0bcc4ce1e46418b86eb569175879081116649727
> git remote add linux-review https://github.com/intel-lab-lkp/linux
> git fetch --no-tags linux-review Baoquan-He/mm-vmalloc-c-allow-vread-to-read-out-vm_map_ram-areas/20221204-093322
> git checkout 0bcc4ce1e46418b86eb569175879081116649727
> # save the config file
> mkdir build_dir && cp config build_dir/.config
> COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=arm SHELL=/bin/bash
>
> If you fix the issue, kindly add following tag where applicable
> | Reported-by: kernel test robot <[email protected]>
>
> All warnings (new ones prefixed by >>):
>
> mm/vmalloc.c: In function 'vb_vread':
> >> mm/vmalloc.c:3540:23: warning: variable 'offset' set but not used [-Wunused-but-set-variable]
> 3540 | unsigned long offset;
> | ^~~~~~

Thanks.

The local variable 'offset' is needed. The handling in vb_vread() needs
to be improved to cover the case in which reading starts from the
middle of a used region, and to zero-fill trailing dirty or free
regions. I will add the below change to v2.


diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 78cae59170d8..6612914459cf 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3526,7 +3522,6 @@ static void vb_vread(char *buf, char *addr, int count)
unsigned long offset;
unsigned int rs, re, n;

- offset = ((unsigned long)addr & (VMAP_BLOCK_SIZE - 1)) >> PAGE_SHIFT;
vb = xa_load(&vmap_blocks, addr_to_vb_idx((unsigned long)addr));

spin_lock(&vb->lock);
@@ -3547,16 +3542,22 @@ static void vb_vread(char *buf, char *addr, int count)
addr++;
count--;
}
- n = (re - rs + 1) << PAGE_SHIFT;
+ /* it could start reading from the middle of a used region */
+ offset = offset_in_page(addr);
+ n = ((re - rs + 1) << PAGE_SHIFT) - offset;
if (n > count)
n = count;
- aligned_vread(buf, start, n);
+ aligned_vread(buf, start + offset, n);

buf += n;
addr += n;
count -= n;
}
spin_unlock(&vb->lock);
+
+ /* zero-fill the remaining dirty or free regions */
+ if (count)
+ memset(buf, 0, count);
}

/**