Subject: Re: [PATCH 3.16 121/178] kvm: arm/arm64: Fix locking for kvm_free_stage2_pgd
From: Suzuki K Poulose
To: Ben Hutchings, linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: akpm@linux-foundation.org, Mark Rutland, Paolo Bonzini, Christoffer Dall, Marc Zyngier
Date: Mon, 17 Jul 2017 16:12:08 +0100
Message-ID: <952e43a7-0073-b068-5ca8-03ec6bb8a363@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 16/07/17 14:56, Ben Hutchings wrote:
> 3.16.46-rc1 review patch. If anyone has any objections, please let me know.
>
> ------------------
>
> From: Suzuki K Poulose
>
> commit 8b3405e345b5a098101b0c31b264c812bba045d9 upstream.
>
> In kvm_free_stage2_pgd() we don't hold the kvm->mmu_lock while calling
> unmap_stage2_range() on the entire memory range for the guest. This could
> cause problems with other callers (e.g., munmap on a memslot) trying to
> unmap a range. And since we have to unmap the entire guest memory range
> holding a spinlock, make sure we yield the lock if necessary, after we
> unmap each PUD range.
> Fixes: d5d8184d35c9 ("KVM: ARM: Memory virtualization setup")
> Cc: Paolo Bonzini
> Cc: Marc Zyngier
> Cc: Christoffer Dall
> Cc: Mark Rutland
> Signed-off-by: Suzuki K Poulose
> [ Avoid vCPU starvation and lockup detector warnings ]
> Signed-off-by: Marc Zyngier
> Signed-off-by: Suzuki K Poulose
> Signed-off-by: Christoffer Dall
> [bwh: Backported to 3.16:
>  - unmap_stage2_range() is a wrapper around unmap_range(), which is
>    also used for HYP page table setup. So unmap_range() should do the
>    cond_resched_lock(), but only if kvm != NULL.
>  - Adjust context]
> Signed-off-by: Ben Hutchings
> ---
>  arch/arm/kvm/mmu.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
>
> --- a/arch/arm/kvm/mmu.c
> +++ b/arch/arm/kvm/mmu.c
> @@ -199,6 +199,12 @@ static void unmap_range(struct kvm *kvm,
>  		next = kvm_pgd_addr_end(addr, end);
>  		if (!pgd_none(*pgd))
>  			unmap_puds(kvm, pgd, addr, next);
> +		/*
> +		 * If the range is too large, release the kvm->mmu_lock
> +		 * to prevent starvation and lockup detector warnings.
> +		 */
> +		if (kvm && next != end)
> +			cond_resched_lock(&kvm->mmu_lock);
>  	} while (pgd++, addr = next, addr != end);
>  }
>
> @@ -553,6 +559,7 @@ int kvm_alloc_stage2_pgd(struct kvm *kvm
>   */
>  static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
>  {
> +	assert_spin_locked(&kvm->mmu_lock);
>  	unmap_range(kvm, kvm->arch.pgd, start, size);
>  }
>
> @@ -637,7 +644,10 @@ void kvm_free_stage2_pgd(struct kvm *kvm
>  	if (kvm->arch.pgd == NULL)
>  		return;
>
> +	spin_lock(&kvm->mmu_lock);
>  	unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE);
> +	spin_unlock(&kvm->mmu_lock);
> +
>  	free_pages((unsigned long)kvm->arch.pgd, S2_PGD_ORDER);
>  	kvm->arch.pgd = NULL;
>  }

Hi Ben,

There is a follow-up patch for this one to make sure we check/free the
PGD under the mmu_lock:

  commit 6c0d706b563af73 ("kvm: arm/arm64: Fix race in resetting stage2 PGD")

So unless you want to fold that in, this looks alright. Looks like we
missed a Cc: stable for that.
I will send it to stable soon.

Cheers
Suzuki