From: Suzuki K Poulose <suzuki.poulose@arm.com>
To: pbonzini@redhat.com
Cc: christoffer.dall@linaro.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	kvm@vger.kernel.org, marc.zyngier@arm.com, mark.rutland@arm.com,
	andreyknvl@google.com, rkrcmar@redhat.com,
	Suzuki K Poulose <suzuki.poulose@arm.com>
Subject: [PATCH 2/2] kvm: arm/arm64: Fix race in resetting stage2 PGD
Date: Mon, 24 Apr 2017 11:10:24 +0100
Message-Id: <1493028624-29837-3-git-send-email-suzuki.poulose@arm.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1493028624-29837-1-git-send-email-suzuki.poulose@arm.com>
References: <1493028624-29837-1-git-send-email-suzuki.poulose@arm.com>

In kvm_free_stage2_pgd() we check the stage2 PGD before taking the
lock, and take the lock only if the PGD is valid. We then unmap the
page tables and release the lock, but reset the PGD only after the
lock has been dropped. This leaves a race window in which another
thread waiting on the lock can still see a valid PGD and proceed to
perform a stage2 operation.

This patch moves the stage2 PGD manipulation under the lock.

Reported-by: Alexander Graf
Cc: Christoffer Dall
Cc: Marc Zyngier
Cc: Paolo Bonzini
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 arch/arm/kvm/mmu.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 582a972..9c4026d 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -835,16 +835,18 @@ void stage2_unmap_vm(struct kvm *kvm)
  */
 void kvm_free_stage2_pgd(struct kvm *kvm)
 {
-	if (kvm->arch.pgd == NULL)
-		return;
+	void *pgd = NULL;
 
 	spin_lock(&kvm->mmu_lock);
-	unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE);
+	if (kvm->arch.pgd) {
+		unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE);
+		pgd = kvm->arch.pgd;
+		kvm->arch.pgd = NULL;
+	}
 	spin_unlock(&kvm->mmu_lock);
 
-	/* Free the HW pgd, one page at a time */
-	free_pages_exact(kvm->arch.pgd, S2_PGD_SIZE);
-	kvm->arch.pgd = NULL;
+	if (pgd)
+		free_pages_exact(pgd, S2_PGD_SIZE);
 }
 
 static pud_t *stage2_get_pud(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
-- 
2.7.4
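
For reference, the pattern the patch adopts is "detach the pointer while
holding the lock, free it after dropping the lock". Below is a minimal
userspace sketch of that pattern, not kernel code: a pthread mutex stands
in for kvm->mmu_lock, and the names table_lock, table_pgd, free_table and
unmap_all_entries are hypothetical stand-ins for illustration only.

/*
 * Userspace illustration of the detach-under-lock pattern used in
 * kvm_free_stage2_pgd() above. All names here are hypothetical.
 */
#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static void *table_pgd;			/* models kvm->arch.pgd */

static void unmap_all_entries(void *pgd)
{
	/* stands in for unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE) */
	(void)pgd;
}

void free_table(void)
{
	void *pgd = NULL;

	pthread_mutex_lock(&table_lock);
	if (table_pgd) {
		unmap_all_entries(table_pgd);
		pgd = table_pgd;
		table_pgd = NULL;	/* clear while the lock is held */
	}
	pthread_mutex_unlock(&table_lock);

	/*
	 * Any other thread that was waiting on the lock now sees
	 * table_pgd == NULL and backs off; only this thread holds the
	 * detached pointer, so freeing it outside the lock is safe.
	 */
	free(pgd);			/* free(NULL) is a no-op */
}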