From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, kernel test robot,
    Shile Zhang, Joerg Roedel, Andrew Morton, Borislav Petkov,
    Dave Hansen,
    Andy Lutomirski, Peter Zijlstra, Thomas Gleixner, Ingo Molnar,
    Linus Torvalds, "Rafael J. Wysocki"
Subject: [PATCH 4.14 039/148] x86/mm: split vmalloc_sync_all()
Date: Wed, 1 Apr 2020 18:17:11 +0200
Message-Id: <20200401161556.531289960@linuxfoundation.org>
X-Mailer: git-send-email 2.26.0
In-Reply-To: <20200401161552.245876366@linuxfoundation.org>
References: <20200401161552.245876366@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

From: Joerg Roedel

commit 763802b53a427ed3cbd419dbba255c414fdd9e7c upstream.

Commit 3f8fd02b1bf1 ("mm/vmalloc: Sync unmappings in
__purge_vmap_area_lazy()") introduced a call to vmalloc_sync_all() in
the vunmap() code-path.  While this change was necessary to maintain
correctness on x86-32-pae kernels, it also adds additional cycles for
architectures that don't need it.

Specifically on x86-64 with CONFIG_VMAP_STACK=y some people reported
severe performance regressions in micro-benchmarks because it now also
calls the x86-64 implementation of vmalloc_sync_all() on vunmap().  But
the vmalloc_sync_all() implementation on x86-64 is only needed for
newly created mappings.

To avoid the unnecessary work on x86-64 and to gain the performance
back, split up vmalloc_sync_all() into two functions:

	* vmalloc_sync_mappings(), and
	* vmalloc_sync_unmappings()

Most call-sites to vmalloc_sync_all() only care about new mappings
being synchronized.  The only exception is the new call-site added in
the above mentioned commit.

Shile Zhang directed us to a report of an 80% regression in reaim
throughput.

Fixes: 3f8fd02b1bf1 ("mm/vmalloc: Sync unmappings in __purge_vmap_area_lazy()")
Reported-by: kernel test robot
Reported-by: Shile Zhang
Signed-off-by: Joerg Roedel
Signed-off-by: Andrew Morton
Tested-by: Borislav Petkov
Acked-by: Rafael J. Wysocki	[GHES]
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc:
Link: http://lkml.kernel.org/r/20191009124418.8286-1-joro@8bytes.org
Link: https://lists.01.org/hyperkitty/list/lkp@lists.01.org/thread/4D3JPPHBNOSPFK2KEPC6KGKS6J25AIDB/
Link: http://lkml.kernel.org/r/20191113095530.228959-1-shile.zhang@linux.alibaba.com
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/mm/fault.c      |   26 ++++++++++++++++++++++++--
 drivers/acpi/apei/ghes.c |    2 +-
 include/linux/vmalloc.h  |    5 +++--
 kernel/notifier.c        |    2 +-
 mm/nommu.c               |   10 +++++++---
 mm/vmalloc.c             |   11 +++++++----
 6 files changed, 43 insertions(+), 13 deletions(-)
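For illustration only (not part of the patch), here is a minimal sketch
of how a call-site is expected to choose between the two new primitives
after the split.  The names example_buf, example_setup() and
example_teardown() are made up for this sketch; only vmalloc(), vfree(),
vmalloc_sync_mappings() and vmalloc_sync_unmappings() come from the
kernel tree, and the allocation side mirrors the
ghes_estatus_pool_expand() call-site updated below.

#include <linux/errno.h>
#include <linux/vmalloc.h>

static void *example_buf;	/* hypothetical state for this sketch */

static int example_setup(unsigned long len)
{
	example_buf = vmalloc(len);
	if (!example_buf)
		return -ENOMEM;

	/*
	 * Creating the mapping may have allocated new p4d/pud pages that
	 * so far exist only in init_mm.  Sync them into every task's
	 * page-tables before, e.g., an NMI handler touches example_buf.
	 */
	vmalloc_sync_mappings();
	return 0;
}

static void example_teardown(void)
{
	vfree(example_buf);
	/*
	 * Tearing a mapping down never allocates p4d/pud pages, so only
	 * the vunmap()/__purge_vmap_area_lazy() path itself calls
	 * vmalloc_sync_unmappings(); ordinary users do not.
	 */
}

This is why the patch below converts the single unmap-side call-site to
vmalloc_sync_unmappings() and every other caller to
vmalloc_sync_mappings().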
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -272,7 +272,7 @@ static inline pmd_t *vmalloc_sync_one(pg
 	return pmd_k;
 }
 
-void vmalloc_sync_all(void)
+static void vmalloc_sync(void)
 {
 	unsigned long address;
 
@@ -299,6 +299,16 @@ void vmalloc_sync_all(void)
 	}
 }
 
+void vmalloc_sync_mappings(void)
+{
+	vmalloc_sync();
+}
+
+void vmalloc_sync_unmappings(void)
+{
+	vmalloc_sync();
+}
+
 /*
  * 32-bit:
  *
@@ -401,11 +411,23 @@ out:
 
 #else /* CONFIG_X86_64: */
 
-void vmalloc_sync_all(void)
+void vmalloc_sync_mappings(void)
 {
+	/*
+	 * 64-bit mappings might allocate new p4d/pud pages
+	 * that need to be propagated to all tasks' PGDs.
+	 */
 	sync_global_pgds(VMALLOC_START & PGDIR_MASK, VMALLOC_END);
 }
 
+void vmalloc_sync_unmappings(void)
+{
+	/*
+	 * Unmappings never allocate or free p4d/pud pages.
+	 * No work is required here.
+	 */
+}
+
 /*
  * 64-bit:
  *
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -201,7 +201,7 @@ static int ghes_estatus_pool_expand(unsi
 	 * New allocation must be visible in all pgd before it can be found by
 	 * an NMI allocating from the pool.
 	 */
-	vmalloc_sync_all();
+	vmalloc_sync_mappings();
 
 	return gen_pool_add(ghes_estatus_pool, addr, PAGE_ALIGN(len), -1);
 }
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -106,8 +106,9 @@ extern int remap_vmalloc_range_partial(s
 extern int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
 							unsigned long pgoff);
 
-void vmalloc_sync_all(void);
-
+void vmalloc_sync_mappings(void);
+void vmalloc_sync_unmappings(void);
+
 /*
  *	Lowlevel-APIs (not for driver use!)
  */
--- a/kernel/notifier.c
+++ b/kernel/notifier.c
@@ -552,7 +552,7 @@ NOKPROBE_SYMBOL(notify_die);
 
 int register_die_notifier(struct notifier_block *nb)
 {
-	vmalloc_sync_all();
+	vmalloc_sync_mappings();
 	return atomic_notifier_chain_register(&die_chain, nb);
 }
 EXPORT_SYMBOL_GPL(register_die_notifier);
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -450,10 +450,14 @@ void vm_unmap_aliases(void)
 EXPORT_SYMBOL_GPL(vm_unmap_aliases);
 
 /*
- * Implement a stub for vmalloc_sync_all() if the architecture chose not to
- * have one.
+ * Implement a stub for vmalloc_sync_[un]mapping() if the architecture
+ * chose not to have one.
  */
-void __weak vmalloc_sync_all(void)
+void __weak vmalloc_sync_mappings(void)
+{
+}
+
+void __weak vmalloc_sync_unmappings(void)
 {
 }
 
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1769,7 +1769,7 @@ void *__vmalloc_node_range(unsigned long
 	 * First make sure the mappings are removed from all page-tables
 	 * before they are freed.
 	 */
-	vmalloc_sync_all();
+	vmalloc_sync_unmappings();
 
 	/*
 	 * In this function, newly allocated vm_struct has VM_UNINITIALIZED
@@ -2318,16 +2318,19 @@ int remap_vmalloc_range(struct vm_area_s
 EXPORT_SYMBOL(remap_vmalloc_range);
 
 /*
- * Implement a stub for vmalloc_sync_all() if the architecture chose not to
- * have one.
+ * Implement stubs for vmalloc_sync_[un]mappings () if the architecture chose
+ * not to have one.
  *
  * The purpose of this function is to make sure the vmalloc area
  * mappings are identical in all page-tables in the system.
  */
-void __weak vmalloc_sync_all(void)
+void __weak vmalloc_sync_mappings(void)
 {
 }
 
+void __weak vmalloc_sync_unmappings(void)
+{
+}
 
 static int f(pte_t *pte, pgtable_t table, unsigned long addr, void *data)
 {
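
A side note on the stubs touched in mm/nommu.c and mm/vmalloc.c above:
they are empty __weak definitions, so an architecture that provides its
own vmalloc_sync_mappings()/vmalloc_sync_unmappings() (as
arch/x86/mm/fault.c does in this patch) overrides them at link time with
no #ifdef.  Below is a stand-alone sketch of that weak-symbol pattern,
using hypothetical names and the plain GCC/Clang attribute that the
kernel's __weak macro expands to; it is illustration only, not part of
the patch.

#include <stdio.h>

/* Generic code: empty default that any other object file may override. */
__attribute__((weak)) void sync_mappings(void)
{
	/* nothing to do when no architecture hook is linked in */
}

/*
 * An "architecture" file would simply provide a strong definition, e.g.
 *	void sync_mappings(void) { puts("syncing page tables"); }
 * and the linker picks it over the weak stub.
 */
int main(void)
{
	sync_mappings();	/* calls whichever definition was linked in */
	return 0;
}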