From: Vijayanand Jitta <[email protected]>
A potential use after free can occur in _vm_unmap_aliases
where an already freed vmap_area could be accessed. Consider
the following scenario:
Process 1                               Process 2

__vm_unmap_aliases                      __vm_unmap_aliases
  purge_fragmented_blocks_allcpus         rcu_read_lock()
    rcu_read_lock()
      list_del_rcu(&vb->free_list)
                                          list_for_each_entry_rcu(vb .. )
  __purge_vmap_area_lazy
    kmem_cache_free(va)
                                          va_start = vb->va->va_start
Here Process 1 is in the purge path: it does list_del_rcu() on the
vmap_block and later frees its vmap_area. Since Process 2 was holding
the RCU read lock at this time, the vmap_block is still visible to it
on the free_list, so Process 2 accesses the vmap_block and dereferences
the vmap_area that was already freed by Process 1, resulting in a use
after free.
Fix this by checking vb->dirty before accessing the vmap_area
structure: since vb->dirty is set to VMAP_BBMAP_BITS in the purge path,
checking for this prevents the use after free.
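
For reference, the purge path marks a block as fully dirty before
unlinking it from the RCU list, roughly as in the simplified sketch
below (illustrative only, not the exact purge_fragmented_blocks()
code):

	spin_lock(&vb->lock);
	vb->free = 0;                 /* prevent further allocations */
	vb->dirty = VMAP_BBMAP_BITS;  /* mark the block as being purged */
	list_del_rcu(&vb->free_list); /* RCU readers may still see vb */
	spin_unlock(&vb->lock);
	/*
	 * vb->va is freed later via __purge_vmap_area_lazy(), so a reader
	 * that observes vb->dirty == VMAP_BBMAP_BITS must not dereference
	 * vb->va and can simply skip the block.
	 */

This is why vb->dirty == VMAP_BBMAP_BITS identifies a block that is on
its way out.
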
Signed-off-by: Vijayanand Jitta <[email protected]>
---
mm/vmalloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d5f2a84..ebb6f57 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1762,7 +1762,7 @@ static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
 		rcu_read_lock();
 		list_for_each_entry_rcu(vb, &vbq->free, free_list) {
 			spin_lock(&vb->lock);
-			if (vb->dirty) {
+			if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
 				unsigned long va_start = vb->va->va_start;
 				unsigned long s, e;
--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation
2.7.4
On Thu, Mar 18, 2021 at 03:38:25PM +0530, [email protected] wrote:
> From: Vijayanand Jitta <[email protected]>
>
> A potential use after free can occur in _vm_unmap_aliases
> where an already freed vmap_area could be accessed. Consider
> the following scenario:
>
> Process 1                               Process 2
>
> __vm_unmap_aliases                      __vm_unmap_aliases
>   purge_fragmented_blocks_allcpus         rcu_read_lock()
>     rcu_read_lock()
>       list_del_rcu(&vb->free_list)
>                                           list_for_each_entry_rcu(vb .. )
>   __purge_vmap_area_lazy
>     kmem_cache_free(va)
>                                           va_start = vb->va->va_start
Or maybe we should switch to kfree_rcu() instead of kmem_cache_free()?
--
Vlad Rezki
On 3/18/2021 10:29 PM, Uladzislau Rezki wrote:
> On Thu, Mar 18, 2021 at 03:38:25PM +0530, [email protected] wrote:
>> From: Vijayanand Jitta <[email protected]>
>>
>> A potential use after free can occur in _vm_unmap_aliases
>> where an already freed vmap_area could be accessed. Consider
>> the following scenario:
>>
>> Process 1                               Process 2
>>
>> __vm_unmap_aliases                      __vm_unmap_aliases
>>   purge_fragmented_blocks_allcpus         rcu_read_lock()
>>     rcu_read_lock()
>>       list_del_rcu(&vb->free_list)
>>                                           list_for_each_entry_rcu(vb .. )
>>   __purge_vmap_area_lazy
>>     kmem_cache_free(va)
>>                                           va_start = vb->va->va_start
> Or maybe we should switch to kfree_rcu() instead of kmem_cache_free()?
>
> --
> Vlad Rezki
>
Thanks for the suggestion.

I see free_vmap_area_lock (a spinlock) is taken in __purge_vmap_area_lazy
while it loops through the list and calls kmem_cache_free() on the va's.
So it looks like we can't replace it with kfree_rcu(), as that might cause
scheduling within atomic context.
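
For reference, the pattern being described looks roughly like the
sketch below (simplified, not the exact __purge_vmap_area_lazy() code;
local_purge_list is a placeholder name for the detached list being
drained):

	spin_lock(&free_vmap_area_lock);
	list_for_each_entry_safe(va, n_va, &local_purge_list, list) {
		/*
		 * Merging the lazily-freed area back into the free tree can
		 * end up calling kmem_cache_free(vmap_area_cachep, va), i.e.
		 * the va is freed while the free_vmap_area_lock spinlock is
		 * still held.
		 */
		merge_or_add_vmap_area(va, &free_vmap_area_root,
				       &free_vmap_area_list);
	}
	spin_unlock(&free_vmap_area_lock);
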
Thanks,
Vijay
--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a
member of Code Aurora Forum, hosted by The Linux Foundation
>
> On 3/18/2021 10:29 PM, Uladzislau Rezki wrote:
> > On Thu, Mar 18, 2021 at 03:38:25PM +0530, [email protected] wrote:
> >> From: Vijayanand Jitta <[email protected]>
> >>
> >> A potential use after free can occur in _vm_unmap_aliases
> >> where an already freed vmap_area could be accessed. Consider
> >> the following scenario:
> >>
> >> Process 1                               Process 2
> >>
> >> __vm_unmap_aliases                      __vm_unmap_aliases
> >>   purge_fragmented_blocks_allcpus         rcu_read_lock()
> >>     rcu_read_lock()
> >>       list_del_rcu(&vb->free_list)
> >>                                           list_for_each_entry_rcu(vb .. )
> >>   __purge_vmap_area_lazy
> >>     kmem_cache_free(va)
> >>                                           va_start = vb->va->va_start
> > Or maybe we should switch to kfree_rcu() instead of kmem_cache_free()?
> >
> > --
> > Vlad Rezki
> >
>
> Thanks for the suggestion.
>
> I see free_vmap_area_lock (a spinlock) is taken in __purge_vmap_area_lazy
> while it loops through the list and calls kmem_cache_free() on the va's.
> So it looks like we can't replace it with kfree_rcu(), as that might cause
> scheduling within atomic context.
>
The double-argument form of kfree_rcu() is safe to use from atomic
contexts; it does not use any sleeping primitives, so the
kmem_cache_free() call could be replaced.
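
For illustration, this is what the two-argument form would look like
here (the "rcu" member is hypothetical; struct vmap_area does not
currently have an rcu_head):

	struct vmap_area {
		/* existing fields omitted */
		struct rcu_head rcu;  /* hypothetical, needed for kfree_rcu() */
	};

	/* Safe under a spinlock: only queues the callback for later. */
	kfree_rcu(va, rcu);
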
On the other hand, I see that the per-cpu KVA allocator is the only
user of the RCU here, and your change fixes it. Feel free to use:
Reviewed-by: Uladzislau Rezki (Sony) <[email protected]>
Thanks.
--
Vlad Rezki