Date: Mon, 24 Apr 2017 22:22:37 +0200
From: Radim Krčmář <rkrcmar@redhat.com>
To: Paolo Bonzini
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Christoffer Dall, Marc Zyngier, Christian Borntraeger, Cornelia Huck, James Hogan, Paul Mackerras, Alexander Graf
Subject: Re: [PATCH 3/4] KVM: add KVM_CREATE_VM2 system ioctl
Message-ID: <20170424202237.GA7776@potion>
References: <20170413201951.11939-1-rkrcmar@redhat.com> <20170413201951.11939-4-rkrcmar@redhat.com> <28b31907-80d8-3d5f-a3a3-0131621fa4c2@redhat.com> <20170424162255.GA5713@potion>
In-Reply-To: <20170424162255.GA5713@potion>

2017-04-24 18:22+0200, Radim Krčmář:
> 2017-04-18 16:30+0200, Paolo Bonzini:
>> On 18/04/2017 16:16, Paolo Bonzini wrote:
>>>> This patch allows userspace to tell how many VCPUs it is going to use,
>>>> which can save memory when allocating the kvm->vcpus array.
>>>> This will be done with a new KVM_CREATE_VM2 IOCTL.
>>>>
>>>> An alternative would be to redo kvm->vcpus as a list or protect the
>>>> array with RCU. RCU is slower and a list is not even practical as
>>>> kvm->vcpus are being used for index-based accesses.
>>>>
>>>> We could have an IOCTL that is called in between KVM_CREATE_VM and the
>>>> first KVM_CREATE_VCPU and sets the size of the vcpus array, but we'd
>>>> be making one useless allocation. Knowing the desired number of VCPUs
>>>> from the beginning seems best for now.
>>>>
>>>> This patch also prepares generic code for architectures that will set
>>>> KVM_CONFIGURABLE_MAX_VCPUS to a non-zero value.
>>>
>>> Why is KVM_MAX_VCPU_ID or KVM_MAX_VCPUS not enough?
>>
>> Ok, for KVM_MAX_VCPUS I should have read the cover letter more carefully. :)
>
> KVM_MAX_VCPU_ID makes sense as the upper bound, I just didn't want to
> mingle the concepts, because the kvm->vcpus array is not indexed by
> VCPU_ID ...
>
> In hindsight, it would be best to change that and get rid of the search.
> I'll see how that looks in v2.

I realized why not:

 * the major user of kvm->vcpus is kvm_for_each_vcpu and it works best
   with a packed array
 * at least arm's KVM_IRQ_LINE uses the order in which VCPUs were created
   to communicate with userspace

Putting this work into a drawer with the "do not share data structure
between kvm_for_each_vcpu and kvm_get_vcpu" idea. :)