From: Punit Agrawal <punit.agrawal@arm.com>
To: kvmarm@lists.cs.columbia.edu
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	suzuki.poulose@arm.com, marc.zyngier@arm.com, christoffer.dall@arm.com,
	stable@vger.kernel.org
Subject: [PATCH v2 1/2] KVM: arm/arm64: Skip updating PMD entry if no change
Date: Mon, 13 Aug 2018 10:40:48 +0100
Message-Id: <20180813094049.3726-2-punit.agrawal@arm.com>
In-Reply-To: <20180813094049.3726-1-punit.agrawal@arm.com>
References: <20180813094049.3726-1-punit.agrawal@arm.com>

Contention on updating a PMD entry by a large number of vcpus can lead to duplicate
work when handling stage 2 page faults. As the page table update follows the
break-before-make requirement of the architecture, it can lead to repeated
refaults due to clearing the entry and flushing the TLBs.

This problem is more likely when -

* there are a large number of vcpus
* the mapping is a large block mapping, such as when using PMD hugepages
  (512MB) with 64k pages

Fix this by skipping the page table update if there is no change in the
entry being updated.

Fixes: ad361f093c1e ("KVM: ARM: Support hugetlbfs backed huge pages")
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: stable@vger.kernel.org
---
 virt/kvm/arm/mmu.c | 40 +++++++++++++++++++++++++++++-----------
 1 file changed, 29 insertions(+), 11 deletions(-)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 1d90d79706bd..2ab977edc63c 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1015,19 +1015,36 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
 	pmd = stage2_get_pmd(kvm, cache, addr);
 	VM_BUG_ON(!pmd);

-	/*
-	 * Mapping in huge pages should only happen through a fault. If a
-	 * page is merged into a transparent huge page, the individual
-	 * subpages of that huge page should be unmapped through MMU
-	 * notifiers before we get here.
-	 *
-	 * Merging of CompoundPages is not supported; they should become
-	 * splitting first, unmapped, merged, and mapped back in on-demand.
-	 */
-	VM_BUG_ON(pmd_present(*pmd) && pmd_pfn(*pmd) != pmd_pfn(*new_pmd));
-
 	old_pmd = *pmd;
+
 	if (pmd_present(old_pmd)) {
+		/*
+		 * Mapping in huge pages should only happen through a
+		 * fault. If a page is merged into a transparent huge
+		 * page, the individual subpages of that huge page
+		 * should be unmapped through MMU notifiers before we
+		 * get here.
+		 *
+		 * Merging of CompoundPages is not supported; they
+		 * should become splitting first, unmapped, merged,
+		 * and mapped back in on-demand.
+		 */
+		VM_BUG_ON(pmd_pfn(old_pmd) != pmd_pfn(*new_pmd));
+
+		/*
+		 * Multiple vcpus faulting on the same PMD entry, can
+		 * lead to them sequentially updating the PMD with the
+		 * same value. Following the break-before-make
+		 * (pmd_clear() followed by tlb_flush()) process can
+		 * hinder forward progress due to refaults generated
+		 * on missing translations.
+		 *
+		 * Skip updating the page table if the entry is
+		 * unchanged.
+		 */
+		if (pmd_val(old_pmd) == pmd_val(*new_pmd))
+			goto out;
+
 		pmd_clear(pmd);
 		kvm_tlb_flush_vmid_ipa(kvm, addr);
 	} else {
@@ -1035,6 +1052,7 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
 	}

 	kvm_set_pmd(pmd, *new_pmd);
+out:
 	return 0;
 }
-- 
2.18.0