From: Zhaoyang Huang <zhaoyang.huang@spreadtrum.com>
To: zhaoyang.huang@spreadtrum.com, Andrew Morton, Michal Hocko, Ingo Molnar,
	zijun_hu, Vlastimil Babka, Thomas Garnier, "Kirill A. Shutemov",
	Andrey Ryabinin, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	zijun_hu@zoho.com
Subject: [PATCH v4] mm/vmalloc: terminate searching since one node found
Date: Thu, 20 Jul 2017 17:24:16 +0800
Message-Id: <1500542656-23332-1-git-send-email-zhaoyang.huang@spreadtrum.com>
X-Mailer: git-send-email 1.7.9.5
X-Mailing-List: linux-kernel@vger.kernel.org

There is no need to find the very beginning of the area within
alloc_vmap_area(): each node can be judged during the walk, and the
search can stop as soon as a suitable hole is found. On a
free_vmap_cache miss, the lookup proceeds as below:

            vmap_area_root
            /            \
        tmp_next          U
          /     (T1)
        tmp
        /
      ...       (T2)
      /
    first

    vmap_area_list->first->......->tmp->tmp_next->...->vmap_area_list
                    |---------(T3)--------|

Under the scenario of a free_vmap_cache miss, the total time spent
finding a suitable hole is T = T1 + T2 + T3, while this commit
decreases it to T1. In fact, vmalloc always starts from the fixed
address VMALLOC_START, which places 'first' close to the beginning of
the list (vmap_area_list) and makes T3 large. The commit especially
helps for a large and almost full vmalloc area. Whereas it does NOT
affect the current quick approach via free_vmap_cache, for it only
takes effect when free_vmap_cache misses, and the cache is
re-established later.
Signed-off-by: Zhaoyang Huang <zhaoyang.huang@spreadtrum.com>
---
 mm/vmalloc.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 8698c1c..f58f445 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -471,9 +471,20 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	while (n) {
 		struct vmap_area *tmp;
+		struct vmap_area *tmp_next;

 		tmp = rb_entry(n, struct vmap_area, rb_node);
+		tmp_next = list_next_entry(tmp, list);
 		if (tmp->va_end >= addr) {
 			first = tmp;
+			if (ALIGN(tmp->va_end, align) + size
+				< tmp_next->va_start) {
+				/*
+				 * free_vmap_cache miss now, don't
+				 * update cached_hole_size here,
+				 * as __free_vmap_area does
+				 */
+				goto found;
+			}
 			if (tmp->va_start <= addr)
 				break;
 			n = n->rb_left;
-- 
1.9.1