Subject: Re: [PATCH RFC v3 4/9] KVM: arm/arm64: use locking helpers in kvm_vgic_create()
From: David Hildenbrand
Organization: Red Hat GmbH
To: Radim Krčmář, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-mips@linux-mips.org, kvm-ppc@vger.kernel.org, linux-s390@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: Paolo Bonzini, Christoffer Dall, Marc Zyngier, Christian Borntraeger, Cornelia Huck, James Hogan, Paul Mackerras, Alexander Graf
Date: Tue, 22 Aug 2017 13:51:47 +0200
Message-ID: <1ef9ab09-b998-b0c9-86e3-7fd2234418fa@redhat.com>
In-Reply-To: <20170821203530.9266-5-rkrcmar@redhat.com>
References: <20170821203530.9266-1-rkrcmar@redhat.com> <20170821203530.9266-5-rkrcmar@redhat.com>

On 21.08.2017 22:35, Radim Krčmář wrote:
> No new VCPUs can be created because we are holding the kvm->lock.
> This means that if we successfully lock all VCPUs, we'll be unlocking the
> same set and there is no need to do extra bookkeeping.
>
> Signed-off-by: Radim Krčmář
> ---
>  virt/kvm/arm/vgic/vgic-init.c       | 24 +++++++++---------------
>  virt/kvm/arm/vgic/vgic-kvm-device.c |  6 +++++-
>  2 files changed, 14 insertions(+), 16 deletions(-)
>
> diff --git a/virt/kvm/arm/vgic/vgic-init.c b/virt/kvm/arm/vgic/vgic-init.c
> index 5801261f3add..feb766f74c34 100644
> --- a/virt/kvm/arm/vgic/vgic-init.c
> +++ b/virt/kvm/arm/vgic/vgic-init.c
> @@ -119,7 +119,7 @@ void kvm_vgic_vcpu_early_init(struct kvm_vcpu *vcpu)
>   */
>  int kvm_vgic_create(struct kvm *kvm, u32 type)
>  {
> -	int i, vcpu_lock_idx = -1, ret;
> +	int i, ret;
>  	struct kvm_vcpu *vcpu;
>
>  	if (irqchip_in_kernel(kvm))
> @@ -140,18 +140,14 @@ int kvm_vgic_create(struct kvm *kvm, u32 type)
>  	 * vcpu->mutex. By grabbing the vcpu->mutex of all VCPUs we ensure
>  	 * that no other VCPUs are run while we create the vgic.
>  	 */
> -	ret = -EBUSY;
> -	kvm_for_each_vcpu(i, vcpu, kvm) {
> -		if (!mutex_trylock(&vcpu->mutex))
> -			goto out_unlock;
> -		vcpu_lock_idx = i;
> -	}
> +	if (!lock_all_vcpus(kvm))
> +		return -EBUSY;

Yes, this makes sense.
>
> -	kvm_for_each_vcpu(i, vcpu, kvm) {
> -		if (vcpu->arch.has_run_once)
> +	kvm_for_each_vcpu(i, vcpu, kvm)
> +		if (vcpu->arch.has_run_once) {
> +			ret = -EBUSY;
>  			goto out_unlock;
> -	}
> -	ret = 0;
> +		}

somehow I prefer keeping the {}

>
>  	if (type == KVM_DEV_TYPE_ARM_VGIC_V2)
>  		kvm->arch.max_vcpus = VGIC_V2_MAX_CPUS;
> @@ -176,11 +172,9 @@ int kvm_vgic_create(struct kvm *kvm, u32 type)
>  	kvm->arch.vgic.vgic_cpu_base = VGIC_ADDR_UNDEF;
>  	kvm->arch.vgic.vgic_redist_base = VGIC_ADDR_UNDEF;
>
> +	ret = 0;
>  out_unlock:
> -	for (; vcpu_lock_idx >= 0; vcpu_lock_idx--) {
> -		vcpu = kvm_get_vcpu(kvm, vcpu_lock_idx);
> -		mutex_unlock(&vcpu->mutex);
> -	}
> +	unlock_all_vcpus(kvm);
>  	return ret;
>  }
>
> diff --git a/virt/kvm/arm/vgic/vgic-kvm-device.c b/virt/kvm/arm/vgic/vgic-kvm-device.c
> index 10ae6f394b71..c5124737c7fc 100644
> --- a/virt/kvm/arm/vgic/vgic-kvm-device.c
> +++ b/virt/kvm/arm/vgic/vgic-kvm-device.c
> @@ -270,7 +270,11 @@ static void unlock_vcpus(struct kvm *kvm, int vcpu_lock_idx)
>
>  void unlock_all_vcpus(struct kvm *kvm)
>  {
> -	unlock_vcpus(kvm, atomic_read(&kvm->online_vcpus) - 1);
> +	int i;
> +	struct kvm_vcpu *tmp_vcpu;
> +
> +	kvm_for_each_vcpu(i, tmp_vcpu, kvm)
> +		mutex_unlock(&tmp_vcpu->mutex);
> }
>
>  /* Returns true if all vcpus were locked, false otherwise */
>

Looks sane to me.

-- 

Thanks,

David