Date: Fri, 11 Aug 2017 15:35:31 -0400
From: Konrad Rzeszutek Wilk
To: Radim Krčmář
Cc: David Hildenbrand, Lan Tianyu, pbonzini@redhat.com, tglx@linutronix.de,
    mingo@redhat.com, hpa@zytor.com, x86@kernel.org, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: Re: [PATCH] KVM/x86: Increase max vcpu number to 352

On Fri, Aug 11, 2017 at 03:00:20PM +0200, Radim Krčmář wrote:
> 2017-08-11 10:11+0200, David Hildenbrand:
> > On 11.08.2017 09:49, Lan Tianyu wrote:
> >> Hi Konrad:
> >> Thanks for your review.
> >>
> >> On 2017-08-11 01:50, Konrad Rzeszutek Wilk wrote:
> >>> On Thu, Aug 10, 2017 at 06:00:59PM +0800, Lan Tianyu wrote:
> >>>> The Intel Xeon Phi chip will support 352 logical threads. For the
> >>>> HPC use case, this means creating a huge VM with as many vcpus as
> >>>> the host has CPUs. This patch increases the max vcpu number to 352.
> >>>
> >>> Why not 1024 or 4096?
> >>
> >> This is set on demand. We can set a higher number since KVM already
> >> has x2apic and vIOMMU interrupt remapping support.
> >>
> >>> Are there any issues with increasing the value from 288 to 352
> >>> right now?
> >>
> >> None found.
>
> Yeah, the only issue until around 2^20 (the maximum of logical x2APIC
> addressing: 2^16 clusters times 16 CPUs per cluster) should be the
> size of per-VM arrays when only a few VCPUs are going to be used.

Migration with 352 CPUs all busy dirtying memory and also poking at
various I/O ports (say, all of them dirtying the VGA) is no problem?

> >>> Also, perhaps this should be made into a Kconfig entry?
> >>
> >> That would be another option, but I find that different platforms
> >> define different MAX_VCPU values. If we introduce a generic Kconfig
> >> entry, different platforms would need different ranges.

By different platforms do you mean q35 vs. the older one, and such? Not
whether the underlying accelerator is TCG, Xen, KVM, or bhyve?

What I was trying to understand is whether it even makes sense for the
platforms to have such limits in the first place - and whether the
accelerators should instead be the ones setting them.

> >> Radim & Paolo, could you give some input? In the QEMU thread, we
> >> will set the max vcpu count to 8192 for x86 VMs. In KVM, the lengths
> >> of the vcpu pointer array in struct kvm and of dest_vcpu_bitmap in
> >> kvm_irq_delivery_to_apic() are specified by KVM_MAX_VCPUS. Should we
> >> stay aligned with QEMU?
> >
> > That would be great.
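[For context: the per-VM structures in question look roughly like this -
a simplified sketch of that era's kernel sources, not the exact code.
The CONFIG_KVM_MAX_VCPUS line is hypothetical, illustrating what the
Kconfig route mentioned above could look like.

	/* arch/x86/include/asm/kvm_host.h */
	#define KVM_MAX_VCPUS 288
	/* hypothetical Kconfig variant of the same limit:
	 * #define KVM_MAX_VCPUS CONFIG_KVM_MAX_VCPUS
	 */

	/* include/linux/kvm_host.h: one pointer per possible vcpu,
	 * embedded in every VM regardless of how many vcpus it uses. */
	struct kvm {
		/* ... */
		struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
		/* ... */
	};

	/* arch/x86/kvm/irq_comm.c: an on-stack bitmap, also sized by
	 * the compile-time maximum rather than the actual vcpu count. */
	int kvm_irq_delivery_to_apic(struct kvm *kvm, struct kvm_lapic *src,
				     struct kvm_lapic_irq *irq,
				     struct dest_map *dest_map)
	{
		DECLARE_BITMAP(dest_vcpu_bitmap, KVM_MAX_VCPUS);
		/* ... */
	}

This is why bumping KVM_MAX_VCPUS costs memory for every VM, even ones
with only a handful of vcpus.]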
> > commit 682f732ecf7396e9d6fe24d44738966699fae6c0
> > Author: Radim Krčmář
> > Date:   Tue Jul 12 22:09:29 2016 +0200
> >
> >     KVM: x86: bump MAX_VCPUS to 288
> >
> >     288 is in high demand because of Knights Landing CPU.
> >     We cannot set the limit to 640k, because that would be wasting space.
> >
> > I think we want to keep it small as long as possible. I remember a
> > patch series from Radim which would dynamically allocate memory for
> > these arrays (using a new VM creation ioctl, specifying the max # of
> > vcpus). Wonder what happened to that (I remember requesting a simple
> > realloc instead of a new VM creation ioctl :] ).
>
> Eh, I forgot about them ... I didn't like the dynamic allocation, as
> we would need to protect the memory, which would result in a much
> bigger changeset, or fragile macros.
>
> I can't recall the disgust now, so I'll send an RFC with the dynamic
> version to see how it turned out.
>
> Thanks.
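[For the curious: the dynamic variant under discussion might look
something like the sketch below - hypothetical, not Radim's actual
series. The vcpu array is sized from a per-VM limit fixed at creation
time instead of the compile-time KVM_MAX_VCPUS.

	struct kvm {
		/* ... */
		u32 max_vcpus;			/* fixed at VM creation */
		struct kvm_vcpu **vcpus;	/* max_vcpus entries */
		/* ... */
	};

	/* Hypothetical helper, called once while creating the VM. */
	static int kvm_alloc_vcpu_array(struct kvm *kvm, u32 max_vcpus)
	{
		kvm->vcpus = kcalloc(max_vcpus, sizeof(*kvm->vcpus),
				     GFP_KERNEL);
		if (!kvm->vcpus)
			return -ENOMEM;
		kvm->max_vcpus = max_vcpus;
		return 0;
	}

Growing the array later (the "simple realloc" idea) is where the
protection problem bites: lockless readers of kvm->vcpus would race
with the reallocation, so the accesses would need wrapping (the
"fragile macros") unless the size stays fixed after creation.]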