Subject: Re: [PATCH 0/4] KVM: add KVM_CREATE_VM2 to allow dynamic kvm->vcpus array
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat GmbH
To: Radim Krčmář <rkrcmar@redhat.com>, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Christoffer Dall, Marc Zyngier, Paolo Bonzini, Christian Borntraeger, Cornelia Huck, James Hogan, Paul Mackerras, Alexander Graf
Date: Tue, 18 Apr 2017 13:11:55 +0200
In-Reply-To: <20170413201951.11939-1-rkrcmar@redhat.com>

On 13.04.2017 22:19, Radim Krčmář wrote:
> The basic idea is to let userspace provide the desired maximal number of
> VCPUs and allocate only necessary memory for them.
>
> The goal is to freeze KVM_MAX_VCPUS at its current level and only increase the

KVM_MAX_VCPUS might still increase, e.g. if HW support for more VCPUs is
coming.
> new KVM_MAX_CONFIGURABLE_VCPUS, probably directly to INT_MAX/KVM_VCPU_ID, so we
> don't have to worry about it for a while.
>
> PPC should be interested in this as they set KVM_MAX_VCPUS to NR_CPUS
> and probably waste few pages for every guest this way.

As we just store pointers, this should be a maximum of 4 pages for ppc
(4k pages, e.g. NR_CPUS = 2048 pointers * 8 bytes = 16k). Is this really
worth yet another VM creation ioctl? Isn't there a nicer way to handle
this internally?

An alternative might be to simply reallocate the array when it reaches a
certain size (on VCPU creation, maybe protecting the pointer via RCU).
But I am not sure if something like that could work.

>
> Radim Krčmář (4):
>   KVM: remove unused __KVM_HAVE_ARCH_VM_ALLOC
>   KVM: allocate kvm->vcpus separately
>   KVM: add KVM_CREATE_VM2 system ioctl
>   KVM: x86: enable configurable MAX_VCPU
>
>  Documentation/virtual/kvm/api.txt | 28 +++++++++++++++
>  arch/x86/include/asm/kvm_host.h   |  1 +
>  arch/x86/kvm/irq_comm.c           |  4 +--
>  include/linux/kvm_host.h          | 23 +++++-------
>  include/uapi/linux/kvm.h          |  8 +++++
>  virt/kvm/kvm_main.c               | 76 +++++++++++++++++++++++++++++++++------
>  6 files changed, 114 insertions(+), 26 deletions(-)
>

-- 

Thanks,

David