Date: Mon, 30 Apr 2018 15:52:07 -0700
From: Andrew Morton
To: Chintan Pandya
Cc: vbabka@suse.cz, labbott@redhat.com, catalin.marinas@arm.com,
	hannes@cmpxchg.org, f.fainelli@gmail.com, xieyisheng1@huawei.com,
	ard.biesheuvel@linaro.org, richard.weiyang@gmail.com,
	byungchul.park@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] mm: vmalloc: Clean up vunmap to avoid pgtable ops twice
Message-Id: <20180430155207.35a3dd94c31503c7a6268a8f@linux-foundation.org>
In-Reply-To: <1523876342-10545-1-git-send-email-cpandya@codeaurora.org>
References: <1523876342-10545-1-git-send-email-cpandya@codeaurora.org>

On Mon, 16 Apr 2018 16:29:02 +0530 Chintan Pandya wrote:

> vunmap does page table clear operations twice in the
> case when DEBUG_PAGEALLOC_ENABLE_DEFAULT is enabled.
>
> So, clean up the code as that is unintended.
>
> As a perf gain, we save few us. Below ftrace data was
> obtained while doing 1 MB of vmalloc/vfree on ARM64
> based SoC *without* this patch applied. After this
> patch, we can save ~3 us (on 1 extra vunmap_page_range).
>
>  CPU  DURATION                  FUNCTION CALLS
>  |     |   |                     |   |   |   |
>  6)               |  __vunmap() {
>  6)               |    vmap_debug_free_range() {
>  6)   3.281 us    |      vunmap_page_range();
>  6) + 45.468 us   |    }
>  6)   2.760 us    |    vunmap_page_range();
>  6) ! 505.105 us  |  }

It's been a long time since I looked at the vmap code :(

> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -603,26 +603,6 @@ static void unmap_vmap_area(struct vmap_area *va)
>  	vunmap_page_range(va->va_start, va->va_end);
>  }
> 
> -static void vmap_debug_free_range(unsigned long start, unsigned long end)
> -{
> -	/*
> -	 * Unmap page tables and force a TLB flush immediately if pagealloc
> -	 * debugging is enabled. This catches use after free bugs similarly to
> -	 * those in linear kernel virtual address space after a page has been
> -	 * freed.
> -	 *
> -	 * All the lazy freeing logic is still retained, in order to minimise
> -	 * intrusiveness of this debugging feature.
> -	 *
> -	 * This is going to be *slow* (linear kernel virtual address debugging
> -	 * doesn't do a broadcast TLB flush so it is a lot faster).
> -	 */
> -	if (debug_pagealloc_enabled()) {
> -		vunmap_page_range(start, end);
> -		flush_tlb_kernel_range(start, end);
> -	}
> -}
> -
>  /*
>   * lazy_max_pages is the maximum amount of virtual address space we gather up
>   * before attempting to purge with a TLB flush.
> @@ -756,6 +736,9 @@ static void free_unmap_vmap_area(struct vmap_area *va)
>  {
>  	flush_cache_vunmap(va->va_start, va->va_end);
>  	unmap_vmap_area(va);
> +	if (debug_pagealloc_enabled())
> +		flush_tlb_kernel_range(va->va_start, va->va_end);
> +
>  	free_vmap_area_noflush(va);
>  }
> 
> @@ -1142,7 +1125,6 @@ void vm_unmap_ram(const void *mem, unsigned int count)
>  	BUG_ON(!PAGE_ALIGNED(addr));
> 
>  	debug_check_no_locks_freed(mem, size);
> -	vmap_debug_free_range(addr, addr+size);

This appears to be a functional change: if (count <= VMAP_MAX_ALLOC) and
we're in debug mode then the vunmap_page_range/flush_tlb_kernel_range
will no longer be performed.  Why is this ok?

>  	if (likely(count <= VMAP_MAX_ALLOC)) {
>  		vb_free(mem, size);
> @@ -1499,7 +1481,6 @@ struct vm_struct *remove_vm_area(const void *addr)
>  		va->flags |= VM_LAZY_FREE;
>  		spin_unlock(&vmap_area_lock);
> 
> -	vmap_debug_free_range(va->va_start, va->va_end);
>  	kasan_free_shadow(vm);
>  	free_unmap_vmap_area(va);
> 
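
For illustration only, here is a minimal user-space C sketch (not the kernel
code; the helpers are stubs and the addresses are made up) that models the two
call paths discussed above: before the patch, debug_pagealloc makes
vunmap_page_range() walk the page tables twice per vunmap (once via
vmap_debug_free_range(), once via unmap_vmap_area()); after the patch the walk
happens once and only the TLB flush remains conditional.

/*
 * User-space sketch of the free_unmap_vmap_area() paths with
 * debug_pagealloc enabled. All kernel helpers are replaced by stubs.
 */
#include <stdbool.h>
#include <stdio.h>

static bool debug_pagealloc = true;	/* DEBUG_PAGEALLOC_ENABLE_DEFAULT */
static int pgtable_walks;		/* counts vunmap_page_range() calls */

static void vunmap_page_range(unsigned long start, unsigned long end)
{
	pgtable_walks++;
	printf("clear PTEs  [%#lx - %#lx]\n", start, end);
}

static void flush_tlb_kernel_range(unsigned long start, unsigned long end)
{
	printf("flush TLB   [%#lx - %#lx]\n", start, end);
}

/* Old path: vmap_debug_free_range() followed by unmap_vmap_area(). */
static void free_unmap_vmap_area_old(unsigned long start, unsigned long end)
{
	if (debug_pagealloc) {			/* vmap_debug_free_range() */
		vunmap_page_range(start, end);
		flush_tlb_kernel_range(start, end);
	}
	vunmap_page_range(start, end);		/* unmap_vmap_area() */
}

/* Patched path: one page-table walk, flush only when debugging. */
static void free_unmap_vmap_area_new(unsigned long start, unsigned long end)
{
	vunmap_page_range(start, end);		/* unmap_vmap_area() */
	if (debug_pagealloc)
		flush_tlb_kernel_range(start, end);
}

int main(void)
{
	unsigned long start = 0xffff000008000000UL;	/* illustrative VA */
	unsigned long end = start + (1UL << 20);	/* 1 MB, as in the ftrace log */

	puts("before the patch:");
	free_unmap_vmap_area_old(start, end);
	printf("page-table walks: %d\n\n", pgtable_walks);

	pgtable_walks = 0;
	puts("after the patch:");
	free_unmap_vmap_area_new(start, end);
	printf("page-table walks: %d\n", pgtable_walks);
	return 0;
}

Running the sketch shows two page-table walks for the old path and one for the
new path, which corresponds to the extra vunmap_page_range() (~3 us) visible
in the ftrace trace quoted above.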