Date: Mon, 24 Apr 2017 18:22:55 +0200
From: Radim Krčmář <rkrcmar@redhat.com>
To: Paolo Bonzini
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Christoffer Dall,
 Marc Zyngier, Christian Borntraeger, Cornelia Huck, James Hogan,
 Paul Mackerras, Alexander Graf
Subject: Re: [PATCH 3/4] KVM: add KVM_CREATE_VM2 system ioctl
Message-ID: <20170424162255.GA5713@potion>
References: <20170413201951.11939-1-rkrcmar@redhat.com>
 <20170413201951.11939-4-rkrcmar@redhat.com>
 <28b31907-80d8-3d5f-a3a3-0131621fa4c2@redhat.com>

2017-04-18 16:30+0200, Paolo Bonzini:
> On 18/04/2017 16:16, Paolo Bonzini wrote:
>>> This patch allows userspace to tell how many VCPUs it is going to use,
>>> which can save memory when allocating the kvm->vcpus array. This will
>>> be done with a new KVM_CREATE_VM2 IOCTL.
>>>
>>> An alternative would be to redo kvm->vcpus as a list or protect the
>>> array with RCU. RCU is slower and a list is not even practical as
>>> kvm->vcpus is being used for index-based accesses.
>>>
>>> We could have an IOCTL that is called between KVM_CREATE_VM and the
>>> first KVM_CREATE_VCPU and sets the size of the vcpus array, but we'd
>>> be making one useless allocation. Knowing the desired number of VCPUs
>>> from the beginning seems best for now.
>>>
>>> This patch also prepares generic code for architectures that will set
>>> KVM_CONFIGURABLE_MAX_VCPUS to a non-zero value.
>>
>> Why is KVM_MAX_VCPU_ID or KVM_MAX_VCPUS not enough?
>
> Ok, for KVM_MAX_VCPUS I should have read the cover letter more carefully. :)

KVM_MAX_VCPU_ID makes sense as the upper bound, I just didn't want to mingle
the concepts, because the kvm->vcpus array is not indexed by VCPU_ID ...

In hindsight, it would be best to change that and get rid of the search.
I'll see how that looks in v2.
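
To make the "search" concrete: the plain array lookup is a direct index,
while the lookup by VCPU_ID has to fall back to scanning the array, because
vcpu_id is not guaranteed to match the creation order. Roughly like this
(simplified sketch, not the exact kvm_host.h code; barriers and bounds
checks omitted):

static inline struct kvm_vcpu *kvm_get_vcpu(struct kvm *kvm, int i)
{
	return kvm->vcpus[i];            /* indexed by creation order */
}

static inline struct kvm_vcpu *kvm_get_vcpu_by_id(struct kvm *kvm, int id)
{
	struct kvm_vcpu *vcpu;
	int i;

	kvm_for_each_vcpu(i, vcpu, kvm)  /* linear scan over kvm->vcpus */
		if (vcpu->vcpu_id == id)
			return vcpu;
	return NULL;
}

If the array were indexed by VCPU_ID directly, the second helper would
collapse into the first one, which is the change hinted at above for v2.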
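
And for the new ioctl itself, the userspace flow would be something along
these lines -- just a sketch, the struct and field names below are only for
illustration and don't necessarily match the uapi in the patch:

#include <sys/ioctl.h>
#include <linux/kvm.h>   /* needs a kernel tree with the KVM_CREATE_VM2 patch */

/* Illustrative only; the real configuration struct may differ. */
struct kvm_vm_config_example {
	__u64 type;          /* same meaning as the KVM_CREATE_VM argument */
	__u64 max_vcpus;     /* used to size the kvm->vcpus array up front */
};

static int create_vm(int kvm_fd, __u64 nr_vcpus)
{
	struct kvm_vm_config_example cfg = { .max_vcpus = nr_vcpus };
	int vm_fd = ioctl(kvm_fd, KVM_CREATE_VM2, &cfg);

	/* If the kernel doesn't know the new ioctl, fall back to the old one
	 * and accept the KVM_MAX_VCPUS-sized allocation. */
	if (vm_fd < 0)
		vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, 0UL);
	return vm_fd;
}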