Subject: Re: [PATCH v3] KVM: remove buggy vcpu id check on vcpu creation
From: Wanpeng Li
To: Radim Krčmář
Cc: Greg Kurz, Paolo Bonzini, james.hogan@imgtec.com, Ingo Molnar, linux-mips@linux-mips.org, kvm, linux-kernel@vger.kernel.org, qemu-ppc@nongnu.org, Cornelia Huck, Paul Mackerras, David Gibson
Date: Sun, 24 Apr 2016 06:54:15 +0800

2016-04-22 21:07 GMT+08:00 Radim Krčmář:
> 2016-04-22 09:40+0800, Wanpeng Li:
>> 2016-04-21 23:29 GMT+08:00 Radim Krčmář:
>>> x86 vcpu_id encodes APIC ID and APIC ID encodes CPU topology by
>>> reserving blocks of bits for socket/core/thread, so if core or thread
>>> count isn't a power of two, then the set of valid APIC IDs is sparse,
>>        ^^^^^^^^^^^^^^^^^^^^                                    ^^^^^^
>>
>> Is this the root reason why the recommended maximum of vCPUs per VM is
>> 160 and KVM_MAX_VCPUS is 255, rather than a performance concern?
>
> No, the recommended number of VCPUs is 160 because I didn't bump it
> after PLE stopped killing big guests. :/
>
> You can get a full 255-VCPU guest with a proper configuration, e.g.
> "-smp 255" or "-smp 255,cores=8", and the only problem is scalability,
> but I don't know of anything that doesn't scale to that point.
>
> (Scaling up to 2^32 is harder, because you don't want an O(N) search,
> nor full allocation on smaller guests. Neither is a big problem now.)

I see, thanks Radim.

Regards,
Wanpeng Li
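
For illustration, a minimal user-space sketch (plain C, not kernel code; the socket/core/thread counts and field widths are made-up assumptions, derived the usual way as ceil(log2(count)) bits per level) of how an x86 APIC ID packs socket/core/thread into bit fields, and why a non-power-of-two core count leaves holes in the valid ID space:

#include <stdio.h>

/* Smallest number of bits able to represent 'count' distinct values. */
static unsigned int bits_for(unsigned int count)
{
	unsigned int bits = 0;

	while ((1u << bits) < count)
		bits++;
	return bits;
}

int main(void)
{
	/* Hypothetical topology: 2 sockets, 3 cores/socket, 2 threads/core. */
	unsigned int sockets = 2, cores = 3, threads = 2;
	unsigned int thread_bits = bits_for(threads);	/* 1 bit */
	unsigned int core_bits = bits_for(cores);	/* 2 bits: 3 rounds up to 4 */

	for (unsigned int s = 0; s < sockets; s++)
		for (unsigned int c = 0; c < cores; c++)
			for (unsigned int t = 0; t < threads; t++) {
				unsigned int apic_id =
					(s << (core_bits + thread_bits)) |
					(c << thread_bits) | t;
				printf("socket %u core %u thread %u -> APIC ID %u\n",
				       s, c, t, apic_id);
			}

	/*
	 * Prints IDs 0-5 and 8-13; IDs 6, 7, 14 and 15 are never generated
	 * because the core field reserves 2 bits for only 3 cores.  That is
	 * the sparseness discussed above: a check that assumes a dense
	 * 0..N-1 vcpu_id space can reject IDs a guest may legitimately use.
	 */
	return 0;
}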