Date: Tue, 29 Aug 2017 12:00:16 +0200
From: Christoffer Dall
To: Radim Krčmář
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-mips@linux-mips.org, kvm-ppc@vger.kernel.org, linux-s390@vger.kernel.org, linux-arm-kernel@lists.infradead.org, Paolo Bonzini, David Hildenbrand, Marc Zyngier, Christian Borntraeger, Cornelia Huck, James Hogan, Paul Mackerras, Alexander Graf
Subject: Re: [PATCH RFC v3 4/9] KVM: arm/arm64: use locking helpers in kvm_vgic_create()
Message-ID: <20170829100016.GU24649@cbox>
References: <20170821203530.9266-1-rkrcmar@redhat.com> <20170821203530.9266-5-rkrcmar@redhat.com>
In-Reply-To: <20170821203530.9266-5-rkrcmar@redhat.com>

On Mon, Aug 21, 2017 at 10:35:25PM +0200, Radim Krčmář wrote:
> No new VCPUs can be created because we are holding the kvm->lock.
> This means that if we successfully lock all VCPUs, we'll be unlocking
> the same set and there is no need to do extra bookkeeping.
>
> Signed-off-by: Radim Krčmář
> ---
>  virt/kvm/arm/vgic/vgic-init.c       | 24 +++++++++---------------
>  virt/kvm/arm/vgic/vgic-kvm-device.c |  6 +++++-
>  2 files changed, 14 insertions(+), 16 deletions(-)
>
> diff --git a/virt/kvm/arm/vgic/vgic-init.c b/virt/kvm/arm/vgic/vgic-init.c
> index 5801261f3add..feb766f74c34 100644
> --- a/virt/kvm/arm/vgic/vgic-init.c
> +++ b/virt/kvm/arm/vgic/vgic-init.c
> @@ -119,7 +119,7 @@ void kvm_vgic_vcpu_early_init(struct kvm_vcpu *vcpu)
>   */
>  int kvm_vgic_create(struct kvm *kvm, u32 type)
>  {
> -	int i, vcpu_lock_idx = -1, ret;
> +	int i, ret;
>  	struct kvm_vcpu *vcpu;
>
>  	if (irqchip_in_kernel(kvm))
> @@ -140,18 +140,14 @@ int kvm_vgic_create(struct kvm *kvm, u32 type)
>  	 * vcpu->mutex. By grabbing the vcpu->mutex of all VCPUs we ensure
>  	 * that no other VCPUs are run while we create the vgic.
>  	 */
> -	ret = -EBUSY;
> -	kvm_for_each_vcpu(i, vcpu, kvm) {
> -		if (!mutex_trylock(&vcpu->mutex))
> -			goto out_unlock;
> -		vcpu_lock_idx = i;
> -	}
> +	if (!lock_all_vcpus(kvm))
> +		return -EBUSY;
>
> -	kvm_for_each_vcpu(i, vcpu, kvm) {
> -		if (vcpu->arch.has_run_once)
> +	kvm_for_each_vcpu(i, vcpu, kvm)
> +		if (vcpu->arch.has_run_once) {
> +			ret = -EBUSY;
>  			goto out_unlock;
> -	}
> -	ret = 0;
> +		}

I also prefer the additional brace here.
>
>  	if (type == KVM_DEV_TYPE_ARM_VGIC_V2)
>  		kvm->arch.max_vcpus = VGIC_V2_MAX_CPUS;
> @@ -176,11 +172,9 @@ int kvm_vgic_create(struct kvm *kvm, u32 type)
>  	kvm->arch.vgic.vgic_cpu_base = VGIC_ADDR_UNDEF;
>  	kvm->arch.vgic.vgic_redist_base = VGIC_ADDR_UNDEF;
>
> +	ret = 0;
>  out_unlock:
> -	for (; vcpu_lock_idx >= 0; vcpu_lock_idx--) {
> -		vcpu = kvm_get_vcpu(kvm, vcpu_lock_idx);
> -		mutex_unlock(&vcpu->mutex);
> -	}
> +	unlock_all_vcpus(kvm);
>  	return ret;
>  }
>
> diff --git a/virt/kvm/arm/vgic/vgic-kvm-device.c b/virt/kvm/arm/vgic/vgic-kvm-device.c
> index 10ae6f394b71..c5124737c7fc 100644
> --- a/virt/kvm/arm/vgic/vgic-kvm-device.c
> +++ b/virt/kvm/arm/vgic/vgic-kvm-device.c
> @@ -270,7 +270,11 @@ static void unlock_vcpus(struct kvm *kvm, int vcpu_lock_idx)
>
>  void unlock_all_vcpus(struct kvm *kvm)
>  {
> -	unlock_vcpus(kvm, atomic_read(&kvm->online_vcpus) - 1);
> +	int i;
> +	struct kvm_vcpu *tmp_vcpu;
> +
> +	kvm_for_each_vcpu(i, tmp_vcpu, kvm)
> +		mutex_unlock(&tmp_vcpu->mutex);
> }
>
> /* Returns true if all vcpus were locked, false otherwise */
> --
> 2.13.3
>

As noted on the other patch, it looks a bit strange to modify
unlock_all_vcpus() here without also doing something about the error
path in lock_all_vcpus().

Otherwise this patch looks fine to me.

Thanks,
-Christoffer